Unless you are living in some cave deep in the Mariana Trench, you must have heard the term ‘artificial intelligence’. Those who have watched The Terminator movies or The 100, among other popular movies and TV shows, have definitely heard of it. Well, what is it? It is a branch of computer science that deals with the simulation of intelligent behaviour in computers. Instead of being given explicit instructions, a machine can learn to perform some tasks by itself without human intervention.
One of the main goals of AI is to enhance the collaboration between humans and machines. This kind of collaboration has been going on for millennia: humans have learnt to use the tools around them, from fire to bronze implements, to improve their lives. In this context, computers can be regarded as one of the tools that have greatly impacted our lives. We interact with computers every day, and in fact, we would find life extremely difficult without them. But what if we could improve the way we use these computers? Our lives would be better, and we could solve some of today’s problems. That is what AI is trying to do. Since AI is a relatively new field, many people in computer science believe it has not yet reached its full potential. So don’t be worried, AI is not going to take over the world and kill us all. After all, it is being built by humans themselves.
AI is already in use: for example, speech recognition in Google Assistant, Siri and Alexa. Corporations also use it in service delivery (since it saves money), for instance as virtual customer-care assistants, or to select the news articles on Microsoft Edge’s homepage. And there are many more applications. Innovators around the world keep coming up with ingenious ways to apply this technology to their problems. One of these innovators is Mark Sagar, CEO of Soul Machines, based in Auckland, New Zealand. He is trying to create virtual characters that a person can actually interact with, much like a virtual assistant, but with a more ‘natural’ engagement. You see an avatar on the screen and engage it in conversation (almost like video-calling a friend). To make the conversation feel natural, unlike the scripted digital characters acting as customer service reps, the avatar must be able to learn, interpret and interact with its environment. The human image (actually the AI) is driven by neural networks, which are simply interconnections of nodes that receive, process and send information. They are the building blocks of AI. For comparison, the average human brain contains around 85 billion neurons (‘nodes’), which means it monstrously supersedes any artificial network, and also makes it almost impossible to recreate a human brain.
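To make the ‘interconnection of nodes’ idea concrete, here is a toy sketch of my own (not Soul Machines’ code, and with arbitrary made-up weights): each node multiplies its inputs by weights, sums them, and squashes the result through an activation function before passing it on.

```python
import math

def node(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of its inputs plus a bias,
    squashed into the range (0, 1) by a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def tiny_network(inputs):
    """A tiny network: two hidden nodes feed one output node.
    The weights here are arbitrary, just to show the wiring."""
    h1 = node(inputs, [0.5, -0.6], 0.1)      # hidden node 1
    h2 = node(inputs, [-0.3, 0.8], 0.0)      # hidden node 2
    return node([h1, h2], [1.2, -0.7], 0.2)  # output node

print(round(tiny_network([1.0, 0.5]), 3))
```

Real networks chain millions of such nodes and learn their weights from data, but the basic unit really is this simple.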
One of the methods used to train the AI is known as ‘object recognition’. Through a camera, thousands of different objects are shown to the program so that it learns to recognize patterns and, consequently, to identify objects by itself. It’s basically how you can tell the difference between a bicycle and a cow, or between two faces, albeit subconsciously and effortlessly, thanks to the billions of neurons in your brain. Mr Sagar has gone further and enhanced his AI with ‘adaptive computing’. He managed to simulate neurotransmitters and even hormonal release within the avatar he created, to make it even more lifelike. In this case, the avatar was a baby. Thanks to its adaptive computing, the avatar could cry upon being scared, or laugh and giggle in a game of peek-a-boo. Based on this, it is feasible to create robots capable of empathy that could comfort old people in nursing homes. At this point I could imagine telling a joke to an avatar and it actually laughing. It sounds fanciful, but it is very possible.
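The idea of learning to recognize objects from labelled examples can be sketched with a classic perceptron, the simplest trainable node. This is purely illustrative (real object recognition uses deep networks trained on raw pixels, not two hand-made feature numbers), but it shows the core loop: guess, compare with the correct label, nudge the weights, repeat.

```python
# Made-up feature vectors: (height in metres, number of wheels).
examples = [
    ((1.0, 2.0), 1),   # bicycle-like object -> label 1
    ((1.1, 2.0), 1),
    ((1.5, 0.0), 0),   # cow-like object -> label 0
    ((1.4, 0.0), 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(features):
    """Classify: weighted sum above zero means 'bicycle' (1), else 'cow' (0)."""
    total = sum(f * w for f, w in zip(features, weights)) + bias
    return 1 if total > 0 else 0

# Training: nudge the weights toward the correct answer after each mistake.
for _ in range(20):
    for features, label in examples:
        error = label - predict(features)
        weights = [w + learning_rate * error * f
                   for w, f in zip(weights, features)]
        bias += learning_rate * error

print(predict((1.0, 2.0)), predict((1.5, 0.0)))  # 1 0
```

After a few passes the program separates the two classes by itself, which is the same principle, vastly scaled up, behind showing a camera thousands of objects.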
Mr Sagar, together with his team at Soul Machines, embarked on an interesting project. They intended to create an avatar of will.i.am, a popular musician and member of the Black Eyed Peas (I’m sure you have heard at least one of the band’s songs). Will.i.am had the ambition of creating a digital version of himself, and he was pretty excited at the prospect of interacting with his own intelligent avatar. So how did Soul Machines go about it? Well, numerous pictures of will.i.am were taken, with him making all kinds of facial expressions. Next was recreating his voice, so he was recorded reading many phrases. Combine the images (by modelling them in 3D) with the voice, and there you have an avatar (a laborious process, and definitely not as easy as it sounds). At the initial testing stages, will.i.am seemed content with how his avatar was faring.
There are many more ways in which AI has found application in solving problems and bringing new insights to everyday situations. There are several I personally find interesting, but they cannot all be squeezed into a single article. Please be on the lookout for more of my articles to find out more about the applications of this phenomenal technology. Thank you.