Opinion | When it comes to using AI, the real question is one of trust
- Mechanisms need to be put in place to ensure users can trust recommendations, predictions and decisions made by AI systems. AI needs to be explainable rather than a black box.
These events are not surprising for any new technology with such a wide potential impact on our lives. The accuracy of the technology deserves careful scrutiny, as do informed consent, legal liability and ethical use. These are all healthy steps towards ensuring legitimate use while respecting the rights of individuals.
But artificial intelligence and deep-learning technology need not be privacy invasion tools.
The AI technology behind facial recognition has much wider uses. It can be used not only to identify a person, but also to detect the presence of a face and its features, and even to predict health, emotions and mental states. The data privacy implications differ depending on the application and the algorithms used.
Recognising faces is only one of the many important capabilities deep-learning technology has to offer. The most interesting is probably facial analytics, which analyses photos or videos to extract facial characteristics and predict gender, age and even body mass index. It has been used to detect early signs of diabetes, heart disease or dementia, and to measure heart rate and blood pressure.