- As AI and machine learning gain wider adoption, the issue of trust becomes more important
- Even big tech firms have seen AI projects go wrong
- Certification and quality control systems are being set up and may help build trust
As artificial intelligence (AI) and machine learning (ML) move out of the lab and into real-world business applications, companies are asking: “Can we trust AI and ML for decision-making?”
It’s a question that matters to more than just large businesses. The government’s AI Activity in UK Businesses study, published in January 2022, found that even among small firms, 15% had adopted at least one AI technology. The government data suggests that every working day 200 small- and mid-sized firms invest in their first AI application.
AI is being used in a wide range of applications, including data analysis (such as demand forecasting), natural language processing (deployed in customer-facing roles such as web chatbots), and computer vision (often used in quality control).
Albert King, Chief Data Officer for Scotland, said in March: “Being trustworthy, ethical [and] inclusive are actually necessary conditions for comfortable adoption of AI technology.”