Artificial Intelligence: Mankind’s greatest invention yet II

AI is being employed to boost the ability of computers to handle human tasks. What follows is a series of short stories about how people around the world are using AI to make human tasks simpler and significantly less time-consuming, and to improve life in general.

AI can improve human lives by improving the diagnosis of diseases that affect millions of people around the world. Tim Shaw, a former NFL American football player, was diagnosed with a neurological disease known as ALS (amyotrophic lateral sclerosis).

This disease kills the nerve cells connected to the muscles that control voluntary movement. As a result, Tim, an athlete once in superb physical condition, could not do even the most basic of things, such as walking, speaking properly, or scratching his head.

You can imagine how hard it was for him to communicate, even with his caregivers (his parents), as a result of his condition. So Google researchers, led by Dr. Dimitri Kanevsky and Dr. Julie Cattiau and working with the DeepMind team on Project Euphonia, helped him out.

Together, they created a speech recognition model (one that converts speech to text) that could recognize his impaired speech.

The team also recreated Tim’s voice for whatever he typed on his phone, via a voice synthesis (text-to-speech) algorithm. The datasets for this project were obtained from the American ALS Institute, together with numerous recordings of Tim’s own voice.
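As a rough illustration of the idea only (not Google’s actual method — Project Euphonia fine-tunes large neural speech models), here is a toy matcher that compares a new utterance against a user’s own reference recordings using dynamic time warping, a classic speech-recognition technique. The phrases, feature values, and function names are all made up for this sketch:

```python
# Toy personalized speech matcher using dynamic time warping (DTW).
# Hypothetical simplification: utterances are 1-D "feature tracks";
# real systems work on rich acoustic features and deep models.

def dtw_distance(a, b):
    """Classic DTW: cost of the best alignment between two sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def recognize(utterance, templates):
    """Return the phrase whose recorded template aligns best."""
    return min(templates, key=lambda p: dtw_distance(utterance, templates[p]))

# Per-user templates: simplified stand-ins for a speaker's own recordings.
templates = {
    "hello": [1, 3, 5, 3, 1],
    "water": [5, 5, 2, 1, 1],
}

# A slowed, stretched rendition of "hello" still aligns best with its template,
# which is why per-speaker data helps with impaired or atypical speech.
print(recognize([1, 1, 3, 3, 5, 5, 3, 1], templates))  # -> hello
```

The key point the sketch captures is personalization: the reference templates come from the speaker’s own voice, so speech that a generic model would misread can still be matched.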

With the AI-powered voice recognition app installed on their phones, Tim and his parents could communicate far more effectively. The joy and emotion on his parents’ faces while the scientists ran the app trials with Tim were indescribable. The same approach can help people whose speech has been affected by other neurological conditions, such as strokes and brain injuries.

A complication of diabetes can cause blindness if left untreated for too long. This disease (diabetic retinopathy) causes tiny bleeding spots in the retina, leading to hazy vision. As with most diseases, the earlier it is detected, the more easily it can be treated.

The problem is that there are very few ophthalmologists (eye doctors) to do the eye screening, and even fewer in developing countries.

To get ahead of this problem, researchers at Google built a model to help read the retinal images and reduce the number of doctors required for the task. For this project, over 100,000 scans were analysed by eye specialists, who rated each image on a scale from one (healthy) to five (diseased).

These graded images were used to train an image recognition algorithm, and with such a massive dataset it could determine, effectively and in real time, which retinal images showed retinopathy. A patient could learn whether they had the disease and get recommendations almost immediately after the scans were taken, instead of waiting months for results while their eyesight worsened.
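A minimal sketch of the grading idea, under a drastic assumed simplification: the real system is a deep network trained directly on the specialist-graded scans, whereas this toy assigns a one-to-five grade with a nearest-neighbour rule over two invented features (a bleeding-spot count and a haziness score); the data points are fabricated for illustration:

```python
# Toy retinal grader: nearest-neighbour over made-up features.
# Hypothetical sketch — the real model is a deep CNN trained on
# ~100,000 specialist-graded retinal scans, not hand-picked features.
import math

# Invented "training set": (bleeding_spots, haziness) -> specialist grade.
graded_scans = [
    ((0, 0.1), 1),   # healthy
    ((2, 0.3), 2),
    ((5, 0.5), 3),
    ((9, 0.7), 4),
    ((15, 0.9), 5),  # severely diseased
]

def grade(scan_features):
    """Return the grade of the closest specialist-graded example."""
    def dist(example):
        (x, y), _ = example
        return math.hypot(scan_features[0] - x, scan_features[1] - y)
    return min(graded_scans, key=dist)[1]

print(grade((0.5, 0.15)))  # -> 1, nearest to the healthy example
print(grade((14, 0.8)))    # -> 5, nearest to the severe example
```

The sketch shows why the specialists’ grades matter: the algorithm only ever reproduces the judgments encoded in its labelled training data.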

Anyone found to have the disease can then start treatment early enough to save their sight. Even more exciting, this AI method can be used to train algorithms that detect other chronic ailments, like cancer.

Since time immemorial, human beings have wondered if we are alone in this universe. Sometimes I look up at the night sky and ask myself if there is any other form of intelligent life within the numerous galaxies of the universe. 

Astronomical discoveries over the years increasingly point to the likelihood of other lifeforms beyond the confines of the solar system, yet we have found no conclusive clue. The SETI Institute, founded by Frank Drake, Jill Tarter and Carl Sagan in 1984, attempts to find such clues to this enigma.

Technological advancements since the institute’s founding have allowed it to gather massive amounts of data. The introduction of AI lets scientists see patterns in the data that are beyond the capability of the human brain.

Powerful telescopes, like the Allen Telescope Array, can capture a wide range of frequencies from space. However, doing this 24/7 generates so much data that analysing it would take an enormous amount of time and tedious effort. So scientists calculate the specific time at which a particular celestial body, say a planet, will be in perfect alignment with the Earth, to make the data collection more manageable (don’t ask me how).

At the moment of alignment, called the conjunction time (which lasts only a couple of minutes or so), telescopes capture radio waves from the bodies under observation. The computers digitize the waveforms, and this is where AI is thrown into the mix. The digitized waveforms are analysed by AI models and compared with previously generated datasets to find meaningful patterns. At the initial stages the results might look vague, but with repeated recordings and analyses, something meaningful is bound to emerge.
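The matching step above can be sketched very roughly (an assumed simplification — SETI’s actual pipelines use far more sophisticated signal processing and machine learning): slide a known reference waveform along a digitized recording and score the match at each offset, flagging where a candidate pattern lines up best. The signals and function names here are invented for illustration:

```python
# Toy template matching over a digitized waveform.
# Hypothetical sketch of comparing a recording against known patterns.

def match_score(recording, template, offset):
    """Dot-product similarity of the template against one window."""
    window = recording[offset:offset + len(template)]
    return sum(r * t for r, t in zip(window, template))

def best_offset(recording, template):
    """Offset at which the template correlates most strongly."""
    offsets = range(len(recording) - len(template) + 1)
    return max(offsets, key=lambda k: match_score(recording, template, k))

# A mostly quiet recording (zeros for clarity) with a pattern buried at offset 4.
template = [1, -1, 1, -1]
recording = [0, 0, 0, 0, 1, -1, 1, -1, 0, 0]
print(best_offset(recording, template))  # -> 4
```

In practice this kind of correlation is run over astronomically more data, which is exactly why automating it matters.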

Have you ever imagined human-like robots?

I’m talking about robots that can pass the Turing test. Suzanne Gildert founded Sanctuary AI, a start-up based in Vancouver, Canada, in an attempt to actualize this concept. Designing a robot that comes close to a human involves problem-solving in many areas, from engineering and biology to art and design.

In this process, the start-up team has to deeply understand what it means to be human in order to come up with a proper design. To build a robot that at least looks like a human, parts resembling human external body parts are 3D printed and joined together. The robot gets a sense of touch from capacitive pressure sensors placed and wired on its ‘limbs’; its ‘muscles’ are pneumatic actuators and its ‘eyes’ are simply cameras. The robot developed by Sanctuary AI can track a person’s gaze, hold a simple conversation, and respond to a person’s emotions based on their facial expressions. It can even be given a backstory, for example childhood memories, to help it ‘think and behave’ like a normal person.

This is made possible by the AI modules programmed into a processor inside the robot’s head. The team is currently trying to recreate the human mind.

However, this is next to impossible, for the human mind involves consciousness, something no human being (at the time of this writing, and you can count me in this category) understands fully enough to replicate in something else.

In my subsequent articles, please follow up on other areas where AI has been successfully integrated and how it is changing circumstances for its users.

[i] Credit to The Age of AI by Robert Downey Jr.
