AI can help you make decisions, but often without any explanation of why it reached them, which reduces accountability and invites self-fulfilling prophecies.
Suppose you have earned a fortune and decide to launch a startup. You are excited about the idea, but some statistics give you pause. You know that many startups fail because the idea is poorly conceived or the partners are not committed to the business, and that 90% of startups fail within the first year. If your startup wouldn’t last a year, investing your resources in the venture would be a mistake.
You live in the age of technology, and you have heard that a company has released an AI model that can predict your likelihood of failure. The model is trained on datasets comprising individuals’ hobbies, social media activity, online search histories, spending habits, and records of past startups and their failures. Based on this information, the AI can predict with 95% accuracy whether a startup will fail within its first year of operation. The only problem is that the model gives no reasons for its results. It simply predicts that you will or will not fail, without saying why. So, should you make a business decision based on this AI prediction?
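Before answering, it is worth unpacking what “95% accuracy” means once the 90% base rate of failure is taken into account. The sketch below applies Bayes’ rule under one simplifying assumption the scenario does not state: that the model is equally accurate on startups that fail and on startups that succeed.

```python
# A minimal sketch of the base-rate arithmetic. The symmetric-accuracy
# assumption (same error rate on failures and successes) is ours, not
# part of the scenario.
base_fail = 0.90  # stated base rate: 90% of startups fail in year one
acc = 0.95        # stated model accuracy (assumed symmetric)

# P(actually fails | predicted to fail), by Bayes' rule
p_fail_given_pred = (acc * base_fail) / (
    acc * base_fail + (1 - acc) * (1 - base_fail)
)

# P(actually succeeds | predicted to succeed)
p_ok_given_pred = (acc * (1 - base_fail)) / (
    acc * (1 - base_fail) + (1 - acc) * base_fail
)

print(f"a 'will fail' verdict is right about {p_fail_given_pred:.1%} of the time")  # ~99.4%
print(f"a 'won't fail' verdict is right about {p_ok_given_pred:.1%} of the time")   # ~67.9%
```

Under that assumption, a “you will fail” verdict is almost certainly correct, but a “you won’t fail” verdict is right only about two times in three, simply because genuine successes are rare to begin with.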
Suppose the model predicts that your startup won’t last more than a year after launch. You now have two options: (1) start the business anyway, hoping the prediction is wrong, or (2) abandon the idea, assuming that pressing on would cause more harm. Without knowing the reasons for your predicted failure, you can never tell whether those mysterious factors would still emerge to drain your bank balance.
The Transparency Problem in AI:
The uncertainty behind these options stems from a well-known issue with AI: its lack of transparency. This issue undermines dozens of potential predictive models, such as a business-credibility model that predicts which businesses will fail and file insurance claims, or one that predicts which ex-offenders will commit crimes if hired. Without the reasons behind an AI’s prediction, many people say they cannot think critically about how, or whether, to follow its advice.

The transparency problem not only limits our understanding of these models; it also affects the decision-maker’s accountability. Suppose a model predicts that a prospective university student won’t complete their degree, and the university rejects the application on that basis. What explanation could the university reasonably give? That a mysterious machine predicted you would not complete your degree? That would hardly be fair to the student. We are not always required to explain our behavior to others, but when we are, the limited transparency of AI creates ethical dilemmas around accountability, one of the tradeoffs we make when we ask an AI to make a decision.

The Tradeoff between Accuracy and Transparency:
If you are willing to outsource your decision to an AI, you do so because of the accuracy of its predictions. On this reasoning, if you abandon a promising startup idea, or any other personal or practical decision, you do so because the AI suggests you will fail. However, if transparency is a priority over accuracy, then you will want to examine and weigh the reasons behind a decision, such as why the business would fail, before abandoning the idea entirely. Such reasoned decision-making is important for accountability, and it might even be an opportunity to prove the AI wrong.
Alternatively, there is a possibility that the model has already accounted for your attempts to defy it, and you are simply setting yourself up for failure. A 95% accuracy rate is high but not perfect, meaning that roughly 1 in 20 people will receive a wrong prediction. As more people use such a model, the likelihood increases that a business, a marriage, or an employee predicted to fail will fail precisely because the AI predicted it. If that happens, the AI’s measured success rate will be superficially maintained or even inflated by its own self-fulfilling prophecy. Regardless of what the AI predicts, it is up to you to accept or reject its verdict.
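A toy simulation shows how this feedback loop can inflate measured accuracy. The 50% discouragement rate below is a made-up parameter, not anything from the scenario; the point is only the direction of the effect.

```python
import random

random.seed(0)

N = 100_000
BASE_FAIL = 0.90    # stated base rate: 90% of startups fail in year one
ACCURACY = 0.95     # stated accuracy, assumed symmetric across outcomes
DISCOURAGED = 0.50  # assumption: half of wrongly doomed founders give up

correct = 0
for _ in range(N):
    will_fail = random.random() < BASE_FAIL
    # The model is right 95% of the time, whichever way the truth goes.
    predicted_fail = will_fail if random.random() < ACCURACY else not will_fail
    # Self-fulfilling step: a "fail" verdict leads some founders who would
    # have succeeded to underinvest or quit, making the prediction true.
    if predicted_fail and not will_fail and random.random() < DISCOURAGED:
        will_fail = True
    correct += predicted_fail == will_fail

print(f"measured accuracy: {correct / N:.3f}")  # comes out above 0.95
```

Every founder the prediction discourages converts a would-be miss into a hit, so the model looks slightly more accurate than it really is, and the more faithfully people obey it, the better it appears.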