Understanding cause and effect from observational data primarily means adjusting for potential “confounders” (indirect explanations for a target event). Missing information therefore leads to flawed conclusions about causal effects. The same holds if you can analyze only parts of a complex object at a time.
Xplain Data’s holistic Object Analytics therefore offers new ways to uncover potential cause-and-effect relationships: the wealth of information stored in such an object model is used to quickly evaluate millions of potential confounders. Only factors whose effect cannot be explained via other factors (confounders) are presented.
Without an experiment, this is still not proof of causality. However, Causal Discovery can filter out myriad meaningless correlations and thus help you quickly reach relevant hypotheses about causal effects.
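The core idea of explaining a factor away via a confounder can be illustrated with a small, self-contained sketch. This is not Xplain Data’s actual algorithm; it is a generic, illustrative example with synthetic data, where a hypothetical confounder Z drives both a candidate factor X and the target Y. Marginally, X and Y appear strongly associated, but conditioning on Z makes the association vanish, so X would not be presented as a potential cause:

```python
import random

random.seed(0)

# Synthetic data: a confounder Z drives both the candidate factor X
# and the target Y. X has no direct effect on Y.
data = []
for _ in range(10_000):
    z = random.random() < 0.5                   # hypothetical confounder (e.g. age group)
    x = random.random() < (0.8 if z else 0.2)   # candidate factor, driven by Z
    y = random.random() < (0.7 if z else 0.1)   # target, also driven by Z
    data.append((z, x, y))

def rate(rows, cond, event):
    """P(event | cond), estimated from the sample."""
    sel = [r for r in rows if cond(r)]
    return sum(event(r) for r in sel) / len(sel)

# Marginal association: P(Y | X=1) and P(Y | X=0) differ strongly ...
p_y_x1 = rate(data, lambda r: r[1], lambda r: r[2])
p_y_x0 = rate(data, lambda r: not r[1], lambda r: r[2])

# ... but within a stratum of the confounder Z the gap vanishes.
p_y_x1_z1 = rate(data, lambda r: r[0] and r[1], lambda r: r[2])
p_y_x0_z1 = rate(data, lambda r: r[0] and not r[1], lambda r: r[2])

print(f"marginal gap:   {p_y_x1 - p_y_x0:.2f}")      # large
print(f"within Z=1 gap: {p_y_x1_z1 - p_y_x0_z1:.2f}")  # near zero
```

A causal discovery system applies this kind of conditional test not to one hand-picked Z, but across millions of candidate confounders drawn from the object model.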
Knowing causal dependencies means being able to influence a system – a major step toward intelligent systems in real-world environments.
Example: Factors potentially causing breast cancer, including a graphical representation that visualizes direct and indirect effects on the target of analysis.
The Causal Discovery algorithms are embedded into the Object Explorer and its interactive usability concept. Once configured, just click on a target, and you will see what drives or potentially causes it. The domain expert can reject certain factors and ask for alternatives. In this way, you arrive at results that combine knowledge from the data with domain expertise.
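The reject-and-ask-for-alternatives loop can be sketched in a few lines. This is a hypothetical illustration of the workflow, not the Object Explorer’s actual API: candidate factors carry an illustrative “unexplained effect” score, the top candidates are proposed, and rejecting one surfaces the next-best alternative:

```python
def propose_drivers(scores, rejected, top_k=3):
    """Return the top-k candidate factors by score, excluding expert-rejected ones."""
    candidates = {f: s for f, s in scores.items() if f not in rejected}
    return sorted(candidates, key=candidates.get, reverse=True)[:top_k]

# Illustrative scores for candidate factors (names and values are made up).
scores = {"age": 0.42, "medication_A": 0.31, "smoking": 0.28, "region": 0.05}

print(propose_drivers(scores, rejected=set()))
# → ['age', 'medication_A', 'smoking']

# The domain expert rejects "age"; the next-best alternative moves up.
print(propose_drivers(scores, rejected={"age"}))
# → ['medication_A', 'smoking', 'region']
```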
Any of these algorithms can also be used from within a Python environment. Listen to Paula to understand our Causal Discovery approach at an intuitive level.