One of the most important lessons in this seminar is that causality is much deeper than simply saying “X causes Y.” In research, especially Information Systems research, causality is often hidden inside our theories, models, assumptions, and explanations. Even when scholars avoid the word “cause,” they are usually still making causal claims. When we say technology affects performance, social media shapes mental health, IT investment improves efficiency, or users enact technology through practice, we are still saying that one thing helps bring about another thing.
The central message of this transcript is that researchers must become more explicit and careful about causality. If we do not understand the type of causality we are assuming, we can build weak theories, choose the wrong methods, confuse reviewers, and misrepresent what our studies actually show.
The Basic Difference Between Variance Models and Process Models
The first major distinction in the seminar is between variance models and process models.
A variance model explains change by looking at relationships between variables. It asks whether changes in one variable are associated with changes in another variable. For example, if IT investment increases, does organizational performance also increase? If system latency increases, does work efficiency decrease? This is the kind of logic behind regression, structural equation modeling, and many quantitative studies.
In a variance model, X is treated as both necessary and sufficient for Y. This means that when X changes, Y is expected to change as well. The causal claim is stronger because X is not just a prior condition; it is assumed to help produce the outcome.
A process model, by contrast, explains change through stages, phases, or sequences. It asks how something unfolds over time. In a process model, X is usually necessary but not sufficient for Y. This means Y cannot happen without X, but X alone does not guarantee that Y will happen.
The ceiling fan example makes this clear. If installing a fan requires five steps, step one is necessary for step two, step two is necessary for step three, and so on. But completing step two does not guarantee that step three will happen. You may stop, give up, or lack the right tool. So the earlier step is necessary, but not sufficient.
This is the heart of process thinking.
A process model does not say, “More X produces more Y.” It says, “For this later stage to occur, certain earlier stages must first happen.”
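To make the contrast concrete, here is a minimal Python sketch. It is an illustration only; the variable names, numbers, and the five project stages are invented, not taken from the seminar. The first function encodes variance logic: change X and Y shifts predictably. The second encodes process logic: stages must occur in order, but reaching one stage never guarantees the next.

```python
import random

# Variance logic: X is treated as necessary and sufficient for Y,
# so a change in X is expected to produce a predictable change in Y.
def performance(it_investment, slope=0.8, baseline=10.0):
    return baseline + slope * it_investment

# Process logic: each stage is necessary but not sufficient for the next.
# Completing "analysis" never guarantees that "design" will happen.
STAGES = ["analysis", "design", "implementation", "testing", "deployment"]

def run_project(p_continue=0.7, seed=None):
    rng = random.Random(seed)
    completed = []
    for stage in STAGES:
        completed.append(stage)        # earlier stages must occur first...
        if rng.random() > p_continue:  # ...but the project may stall here
            break                      # necessary, not sufficient
    return completed

print(performance(100.0))    # variance claim: a predictable shift in Y
print(run_project(seed=1))   # process claim: an ordered but contingent sequence
```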
That distinction matters because many researchers mistakenly use variance logic when they are actually studying processes. For example, systems analysis and systems design are not best understood as a variance relationship. Systems analysis does not “increase” systems design in the way IT investment might increase performance. Instead, systems analysis is a prior phase. You cannot properly do design without some form of analysis, but analysis does not guarantee that design will happen. That makes it a process relationship.
Necessary and Sufficient Conditions
The seminar spends time clarifying necessary and sufficient conditions because they sit at the center of causal reasoning.
A necessary condition is something that must be present for an outcome to happen. Oxygen is necessary for fire. Without oxygen, fire cannot occur. But oxygen alone does not create fire. You also need heat, fuel, and other conditions. So oxygen is necessary but not sufficient.
A sufficient condition is something that guarantees the outcome. Being a square is sufficient for being a rectangle, because every square is a rectangle. Once you know a shape is a square, you know it must also be a rectangle.
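In logical notation (a compact restatement, not wording from the transcript), the two ideas run in opposite directions:

```latex
% X is necessary for Y: the outcome implies the condition.
\text{Fire} \Rightarrow \text{Oxygen}
% X is sufficient for Y: the condition implies the outcome.
\text{Square} \Rightarrow \text{Rectangle}
```

The first states what the outcome requires; the second states what the condition guarantees.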
This distinction helps clarify why process models are different from variance models. Process models usually involve necessary steps. Variance models are built around stronger claims where X is treated as a condition that helps produce Y in a predictable way.
This is why the professor says that if you remember one sentence, remember this:
Variance models are based on necessary and sufficient conditions; process models are based on necessary but not sufficient conditions.
Causality Is Not Directly Observed
The seminar then moves into a deeper philosophical point: causality itself is never directly observed.
This sounds strange at first because in everyday life we feel like we see causality all the time. A white ball hits a red ball, and the red ball moves. A knife cuts skin, and a wound appears. Fire touches paper, and the paper burns. These seem obviously causal.
But philosophically, what we actually observe is sequence and association. We observe that one event happens before another. We observe that the two events occur together. But the causal force connecting them is inferred, not directly seen.
This idea goes back to David Hume, who argued that humans never observe causality itself. We observe patterns. When we repeatedly see fire followed by burning, our mind creates the idea that fire causes burning. Causality becomes a mental structure we use to make sense of the world.
This matters for research because it means causality is always an inference. To make a strong causal claim, researchers typically need three things: covariation, temporal precedence, and control of alternative explanations.
Covariation means X and Y move together. Temporal precedence means X comes before Y. Control of alternatives means other possible causes have been ruled out. Without these, the researcher should be careful about using causal language.
That is why scholars often avoid saying “X causes Y” and instead say “X is associated with Y” or “X is related to Y.” A regression may show a relationship, but unless the design establishes timing and rules out alternative explanations, it does not prove causality.
Why Correlation Is Not Causation
The transcript gives classic examples of misleading correlations. People who drive orange cars may have fewer accidents, but that does not mean orange paint causes safer driving. It may be that people who buy unusual car colors behave differently, care more about their vehicles, or drive more carefully. Similarly, ice cream sales and shark sightings may rise together, but sharks do not cause ice cream consumption. Warm weather brings more people to the beach, increases ice cream purchases, and also brings sharks closer to shore.
These examples show why causality requires deeper reasoning. Correlation is necessary for causality, but it is not sufficient. Two things can move together without one causing the other.
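A small simulation makes this concrete. The numbers below are invented for illustration: temperature acts as a hidden common cause of both ice cream sales and shark sightings, producing a correlation between two variables that never influence each other. Controlling for the confounder makes the correlation disappear.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hidden common cause: daily temperature (hypothetical units and effect sizes).
temperature = rng.normal(25, 5, n)

# Temperature drives both outcomes; neither causes the other.
ice_cream_sales = 2.0 * temperature + rng.normal(0, 5, n)
shark_sightings = 0.5 * temperature + rng.normal(0, 5, n)

# The raw correlation looks meaningful...
print(np.corrcoef(ice_cream_sales, shark_sightings)[0, 1])

# ...but it vanishes once temperature is controlled for:
# regress each variable on temperature and correlate the residuals.
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

print(np.corrcoef(residuals(ice_cream_sales, temperature),
                  residuals(shark_sightings, temperature))[0, 1])
```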
For PhD-level research, this is crucial. If a paper claims causality too strongly without the design to support it, reviewers will immediately challenge it. The safer and more accurate move is to state relationships carefully, then discuss possible causal implications later if justified.
Causal Agency: Who or What Drives Change?
The seminar then connects causality to Information Systems by asking a central question:
Who or what has causal power when technology and organizations interact?
This leads to three major perspectives: the technological imperative, the organizational imperative, and the emergent perspective.
The technological imperative assumes technology drives change. Here, technology is the independent force that constrains human action or shapes organizational outcomes. For example, if communication technologies flatten organizational hierarchy by making it easier for lower-level employees to contact executives, then technology is treated as the driver. Similarly, if AI replaces jobs, the causal force is located in the technology.
The organizational imperative assumes humans or organizations drive change. Technology is not forcing outcomes; people choose how to use, design, deploy, or fit technology to organizational goals. For example, if a company uses organizational slack to invest in IT innovation, the organization has agency. If managers align IT investment with organizational structure to improve performance, they are directing technology rather than being constrained by it.
The emergent perspective argues that outcomes arise from the interaction between technology, users, and context. Technology provides an occasion for change, but it does not determine the outcome by itself. The same ERP system may be embraced in one organization and resisted in another because people attach different meanings to it, trust different leaders, or operate in different cultures. Context changes how technology matters.
Why Context Changes Causality
The emergent perspective is especially important because it reflects the messy reality of organizational life.
The same technology does not produce the same outcome everywhere. An AI tool introduced by a respected leader may be seen as empowering. The same AI tool introduced by a disliked manager may be rejected. In one organization, employees may see ERP as a useful coordination system. In another, they may see it as surveillance or control.
This means causality is not always universal. Sometimes X affects Y only under certain contextual conditions. This is what makes the relationship contingent.
The professor describes context as something that can moderate the relationship between technology and action. It changes the meaning, use, and outcome of technology.
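In variance-model language, this kind of contingency is usually captured with an interaction (moderation) term: Y = b0 + b1*X + b2*Z + b3*(X*Z), where a nonzero b3 means the context Z changes the effect of X. Here is a minimal sketch with invented data, in which the effect of a technology flips sign depending on whether the context is favorable:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

technology = rng.normal(0, 1, n)  # e.g., intensity of AI tool use (hypothetical)
context = rng.integers(0, 2, n)   # 0 = distrusted manager, 1 = respected leader

# Invented effect sizes: technology hurts in one context, helps in the other.
outcome = 1.0 + (-0.5 + 1.5 * context) * technology + rng.normal(0, 1, n)

# Moderated regression: outcome = b0 + b1*T + b2*C + b3*(T*C) + error.
X = np.column_stack([np.ones(n), technology, context, technology * context])
coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(coefs)  # the last coefficient, b3, captures the moderation by context
```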
This matters for research design. If you are studying technology across many firms, you may use variance models. But if you are studying how people interpret and respond to technology inside specific organizations, a case study or interpretive approach may be better.
Micro and Macro Levels of Analysis
Another important idea is the level of analysis.
Causality can operate at different levels: individual, group, organization, industry, or society. A model may look very different depending on which level you study.
For example, at the individual level, employees may have discretion over how they use AI. One person may use it creatively, another cautiously, and another not at all. At this level, the organizational imperative may appear stronger because individuals have agency.
But at the organizational or industry level, the picture may change. If every competitor is adopting AI, firms may feel forced to adopt it simply to keep up. At that level, technology may function more like a technological imperative because organizations have less discretion.
This creates the possibility of multi-level causality. Micro-level use may condition macro-level outcomes. A firm may invest heavily in IT, but if employees do not actually use the tools in their daily work, the investment may not improve profitability. Conversely, if employees integrate the technology into core processes, the macro-level effect may become visible.
This is why researchers must be clear about whether they are studying individuals, teams, organizations, industries, or societies. The causal logic may differ across levels.
Every Theory Contains Causal Assumptions
The second part of the transcript shifts into a presentation on a more advanced paper about causality. The major claim is bold:
Every theoretical statement is secretly a causal statement.
Even statements that seem neutral often contain causal logic. “Social media affects mental health” implies a causal relationship. “IT investment drives performance” implies causality. “Users enact technology through practice” suggests that practice causes technology to take on meaning.
This explains why reviewers may disagree so strongly. One reviewer may expect testable propositions, variables, and causal relationships. Another may value rich interpretation, meaning, and process. They are not just disagreeing about the paper; they are operating from different assumptions about what good causal explanation looks like.
When researchers fail to make their causal assumptions explicit, reviewers may judge the paper using incompatible standards.
Pluralism: There Is More Than One Valid View of Causality
The paper discussed in the transcript argues for causal pluralism.
Causal pluralism means accepting that causality can be understood in multiple valid ways. Instead of insisting that only one type of causality is legitimate, researchers should clarify which kind of causality they are using and evaluate other work on its own terms.
This is important because different research traditions emphasize different causal ideas. Positivist researchers may focus on variables and directional associations. Interpretive researchers may focus on meaning, practice, and enactment. Critical researchers may focus on deeper mechanisms, power, and structural constraints.
Pluralism does not mean “anything goes.” It means being explicit, consistent, and fair.
A useful analogy from the transcript is health. A cardiologist may measure blood pressure. A psychiatrist may assess mental well-being. A nutritionist may examine diet. A physiotherapist may assess strength and mobility. None of them is wrong. Each captures a different dimension of health.
Causality works similarly. Different approaches capture different dimensions of causal explanation.
Three Dimensions of Causality
The advanced framework discussed in the transcript organizes causality into three dimensions: causal ontology, causal trajectory, and causal autonomy.
Causal ontology asks: What is causality? Is it a pattern, a real mechanism, or something constituted through meaning and practice?
Causal trajectory asks: What is changing, and how does change move across time and levels?
Causal autonomy asks: Who or what has causal power—humans, technology, or both together?
Together, these dimensions help researchers classify the kind of causal claim they are making.
Causal Ontology: What Kind of Causality Are We Talking About?
The first dimension, causal ontology, includes three positions.
The first is directional association. This is the logic of much quantitative research. It does not claim to directly observe causality. It identifies patterned relationships between variables. For example, IT investment is associated with firm performance. The evidence may come from regression, surveys, or experiments. The causal claim is usually expressed as X relates to Y or X predicts Y.
The second is causal mechanism. This view assumes causality is real but hidden. The researcher’s job is to uncover the mechanism linking X and Y. For example, instead of simply showing that IT investment improves performance, the researcher asks how it does so. Does it improve decision quality? Reduce coordination costs? Increase process efficiency? Strengthen data visibility? The mechanism explains why the relationship exists.
This view is closely aligned with critical realism. Critical realism assumes that mechanisms exist beneath observable events. We may not see them directly, but we can infer and investigate them.
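Mechanism thinking is broader than any statistical technique, but in variance terms it is often approximated by mediation analysis: X works on Y through an intermediate variable M. Here is a minimal sketch with invented variables, in which IT investment improves performance by reducing coordination costs:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

it_investment = rng.normal(0, 1, n)
# Hypothesized mechanism: investment lowers coordination costs...
coordination_cost = -0.7 * it_investment + rng.normal(0, 1, n)
# ...and lower coordination costs raise performance.
performance = -0.6 * coordination_cost + rng.normal(0, 1, n)

# Total association between investment and performance.
print(np.polyfit(it_investment, performance, 1)[0])

# Once the mediator is in the model, the direct path of investment
# should shrink toward zero: the mechanism carries the effect.
X = np.column_stack([np.ones(n), it_investment, coordination_cost])
coefs, *_ = np.linalg.lstsq(X, performance, rcond=None)
print(coefs[1])  # direct effect of investment, controlling for the mechanism
```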
The third is constitutive causality. This view questions whether traditional cause-effect language is sufficient for human social life. It focuses on how people create reality through meaning, practice, and interpretation. For example, plagiarism software does not merely detect plagiarism; it may change what a university defines as plagiarism. The technology helps constitute the meaning of the phenomenon itself.
This is a very different kind of causal logic. The claim is not just "X causes Y" but "X participates in making Y what it is."
The Plagiarism Software Example
The plagiarism software example is one of the richest parts of the transcript.
From a directional association view, we might hypothesize that greater use of plagiarism software leads to more disciplinary hearings. That is a simple X-to-Y relationship.
From a mechanism view, we ask how plagiarism software changes behavior or interpretation. The mechanism may be that plagiarism software redefines plagiarism by making certain forms of textual similarity visible. Once the software exists, universities may begin to treat imitation differently because it can now be detected.
From a constitutive view, context becomes central. The transcript discusses Greek students entering a UK university. In their educational context, memorization and reproduction of texts may be part of normal learning. But in the UK context, the same behavior may be classified as plagiarism. The software does not simply detect misconduct; it participates in translating one educational practice into another institutional meaning.
This example shows why causality in social systems is not always mechanical. Technology can reshape meanings, categories, and institutional practices.
Causal Trajectory: How Change Moves
The second dimension is causal trajectory. This asks how change unfolds across space, time, and levels.
One form is cross-boundary change, where change moves across levels. It may move top-down, bottom-up, or through self-organization.
A top-down example is management pressure leading to technology adoption. A higher-level force affects lower-level behavior.
A bottom-up example is doctors resisting an electronic medical records system. Individual resistance accumulates, becomes collective resistance, and eventually turns into organizational opposition to the system. Here, lower-level action rises to affect the higher-level organization.
Another bottom-up example is swift trust in virtual teams. When individuals introduce themselves in a detailed and open way, small interpersonal actions create trust at the team level.
Another example is fashion. Fashion may be imposed top-down by major fashion houses, or it may emerge bottom-up through influencers and social groups.
The second form is indwelling change. This refers to change that happens organically within a bounded space. There is no strong external force pushing from above or below. Instead, people interact within a setting, develop norms, and change together. For example, when an email system is introduced, users may organically develop new communication practices within the group.
The third form is evolving interlinkage, where the network itself changes as connections among actors, technologies, and systems evolve. This is especially relevant for digital infrastructures, AI systems, and platform ecosystems where one change can reconfigure many relationships.
Causal Autonomy: Who Has the Power?
The third dimension is causal autonomy. This is especially important in Information Systems because IS studies the relationship between humans and technology.
There are three positions.
The first is human sovereignty. This view says humans have causal power. Technology does not act by itself; people use it. The familiar phrase “guns don’t kill people; people kill people” captures this logic. In IS, this view treats technology as a tool shaped by human intention.
The second is technology autonomy. This view says technology can have causal effects independently of continuous human control. Some automated systems, such as algorithmic trading systems, have had this quality for some time. But generative and agentic AI make the issue more pressing because these technologies may produce outputs or actions that are not fully predictable even by their designers.
This is why AI raises new causal questions. If an AI system generates unexpected actions, who or what caused the outcome? The user? The designer? The model? The training data? The institution deploying it?
The third is relational synergy. This view says outcomes emerge from the interaction between humans and technology. Causality is not located only in the human or only in the technology. It is produced by both together.
This position is increasingly important for studying AI, automation, digital platforms, and sociotechnical systems.
Why This Matters for Your Research
The practical takeaway is that every research project must clarify its causal assumptions.
Are you making a variance claim or a process claim? Are you saying X is sufficient for Y, or only necessary? Are you treating technology as the cause, humans as the cause, or the interaction as the cause? Are you studying micro-level action, macro-level outcomes, or both? Are you assuming causality is a directional association, a hidden mechanism, or a constitutive process?
These are not abstract philosophical questions. They shape your literature review, methods, analysis, contribution, and how reviewers evaluate your work.
For example, if your paper is interpretive but reviewers expect variance-based propositions, they may say the contribution is unclear. If your paper is quantitative but you use strong causal language without causal identification, reviewers may reject the claim. If your study is actually about stages but you model it as a regression relationship, your causal structure may be misaligned.
Final Reflection
This seminar teaches that causality is not a minor technical issue. It is the foundation of theory.
Every time we explain change, we are making assumptions about cause. Every time we say something affects, shapes, enables, constrains, drives, or produces something else, we are theorizing causality.
The strongest researchers are not those who casually claim causality. They are those who understand what kind of causal claim they are making and design their study accordingly.
A variance model explains relationships between variables. A process model explains sequences of necessary conditions. An emergent model explains how context shapes the relationship between technology and action. A mechanism-based model explains how and why an effect occurs. A constitutive model explains how meanings and realities are formed through practice.
Each is valuable. Each has limits. The key is alignment.
Closing Thought
Research becomes stronger when causality becomes explicit.
Before writing “X affects Y,” ask:
What kind of causality am I claiming?
Is X necessary, sufficient, or both?
Does causality move across levels?
Is technology driving the change, are humans driving it, or is the outcome emerging from their interaction?
Can my method actually support the claim I am making?
That is the discipline this seminar is teaching.
It is not enough to build a model. You must understand the causal logic beneath it.