- An estimated $3.1 trillion is lost by enterprises every year due to human error.
- 80% of process failures in organizations are due to human error.
- Some examples:
- Mizuho Securities (2005): A single typo led to a $225 million loss when an employee mistakenly sold 610,000 shares for 1 yen each instead of one share for 610,000 yen.
- Volkswagen scandal (2015): Human error and non-compliance led to more than $30 billion in fines and lawsuits.
One question that was the common denominator across every AI conference, webinar, or meetup I attended over the last six months was:
Will Generative AI technology replace Humans?
I am sure you have heard this question, or asked it yourself.
The most common and easiest answer was that AI can never replace humans. There will always be a human in the loop for final decision-making. (Full stop.)
This answer shuts down many more questions, doesn't it? But is that all?
Let's have a look at the two flip sides: there is the human in the loop of AI (HITL), and then there is AI in the human loop (AIITL), as applied to technology and cyber security.
There is much more to this than a binary loop. Let's explore:
Flip side 1: Human in the Loop (HITL)
HITL combines the power of artificial intelligence with human know-how. Humans play a major role in this loop and have access to the STOP button. The approach has existed for quite some time now. Most of the heavy lifting, processing data, spotting trends, and so on, is done by one or more Generative AI models, while the final decision stays in human hands.
It is important to note that this differs from the Human on the Loop (HOTL) approach, where humans only oversee AI's work.
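To make the distinction concrete, here is a minimal sketch of the two loops. The scenario and function names (a transaction risk score, a human approval callback) are purely illustrative assumptions, not any real system or library:

```python
# Hypothetical sketch: Human in the Loop (HITL) vs Human on the Loop (HOTL).
# ai_score and the approval callback are illustrative stand-ins, not a real API.

def ai_score(transaction: dict) -> float:
    """AI does the heavy lifting: scores the risk of a transaction."""
    return 0.9 if transaction["amount"] > 10_000 else 0.1

def hitl_decide(transaction: dict, human_approves) -> str:
    """HITL: the AI recommends, but a human makes the final call (and can hit STOP)."""
    risk = ai_score(transaction)
    recommendation = "block" if risk > 0.5 else "approve"
    return recommendation if human_approves(transaction, recommendation) else "stopped by human"

def hotl_decide(transaction: dict, audit_log: list) -> str:
    """HOTL: the AI decides on its own; the human only watches the log afterwards."""
    decision = "block" if ai_score(transaction) > 0.5 else "approve"
    audit_log.append((transaction, decision))
    return decision

if __name__ == "__main__":
    txn = {"amount": 25_000}
    print(hitl_decide(txn, human_approves=lambda t, rec: rec == "block"))  # human confirms
    log = []
    print(hotl_decide(txn, log), "| entries for human to review later:", len(log))
```

The only structural difference is where the final decision sits: in HITL the human call is on the critical path; in HOTL the AI decision goes through and the human reviews the trail afterwards.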
Statistic: HITL is reported to reduce errors in healthcare diagnosis by 40%.
But looking at the errors CAUSED by humans (see the stats at the beginning), is this the optimal use of AI?
Flip side 2: AI in the Loop (AIITL)
As the name suggests, the AI models perform most of the heavy lifting. Then, humans validate the work and put a rubber stamp on it.
But in the end, AI again monitors the human decisions, flagging biases or inefficiencies caused by Homo sapiens. Hence, another loop on top.
An apt example is the use of AI auditing tools in the hiring process to counter and challenge unconscious human biases. In fact, an estimated 72% of enterprises plan to deploy AI to audit human work (McKinsey).
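As a rough illustration of that reversed loop, here is a minimal sketch with a hypothetical bias-audit step bolted onto a human hiring decision. The names, scores, and audit rule are made up for illustration, not a real auditing tool:

```python
# Hypothetical sketch of AI in the Loop (AIITL): AI drafts a shortlist, a human
# rubber-stamps (or overrides), and a second AI pass audits the human decision.

def ai_shortlist(candidates: list) -> list:
    """AI does the heavy lifting: ranks candidates purely on skill score."""
    return sorted(candidates, key=lambda c: c["skill"], reverse=True)[:2]

def ai_bias_audit(selected: list, pool: list) -> list:
    """AI audits the human's final pick, flagging lopsided outcomes."""
    flags = []
    pool_avg = sum(c["skill"] for c in pool) / len(pool)
    for c in selected:
        if c["skill"] < pool_avg:
            flags.append(f"{c['name']} selected despite below-average skill score")
    return flags

if __name__ == "__main__":
    pool = [{"name": "A", "skill": 90}, {"name": "B", "skill": 70}, {"name": "C", "skill": 40}]
    shortlist = ai_shortlist(pool)          # AI proposes
    human_pick = [shortlist[0], pool[2]]    # human overrides with a weaker candidate
    print(ai_bias_audit(human_pick, pool))  # AI flags the override for review
```

Here the loop closes on the human: the audit output goes back to people, who must then justify or revisit their own decision.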
Pros vs. cons: why neither approach is perfect
The HITL approach, with sapiens in between, slows down processes (e.g., 50% longer approval times for loan processing). Another risk is that humans often over-rely on AI decisions and become a rubber stamp.
And AIITL will always face questions of trust and ethical decision-making. Are we ready to have a black-box AI making final decisions?
Are we ready to let AI fire an employee?
Human error causes 95% of all cyber security breaches. But the AI brain creates 65% more false positives.
My take: is it time to recognize AI as a species? :) Yes, a superior species.
We (Homo sapiens) have held the title of most superior species since the beginning (it is estimated we evolved ca. 300,000 years ago). And yes, we have been credited with various technological and scientific inventions over these years to keep this superiority.
We have also been credited with the extinction of some 100,000 of the roughly 2 million known species. Yes, we also drove to extinction our nearest cousins: Homo neanderthalensis, Homo erectus, the Denisovans, and many others.
In this magical decade, we are at the tipping point of Generative AI use cases. It is estimated that a typical large language model, with ca. 2,000 lines of core code, is more intelligent than 99.5% of Homo sapiens.
The only things that stop AI technology from being recognized as a species are its inability to reproduce, its lack of DNA, and its absence from taxonomy (the science of classifying things based on shared characteristics). But for how long? And who decides these classification rules?
Isn't it time for Homo sapiens to recognize AI as another species? One that is way better than our genome?
To wrap it up,
Are we really building AI to augment capabilities that are missing in homo sapiens? Or are we quietly training ourselves to be led by machines?
“What’s your biggest fear (or hope) about AI in your industry?”
Thanks for reading,
Until next blog, stay curious.
Chakshu Arora.
#Foodforthought #Beyondthenoise


I would love to hear your thoughts and comments on this post :) //CA