Turning Data into Insights: The challenges ahead
We have all been introduced to artificial intelligence at one point or another, through the movies we watch or the books we read. It might appear as a friendly humanoid robot or as an alien overlord bent on taking over the Earth.
Looking at how popular culture has depicted artificial intelligence tells us two things. First, it reveals the technology's many potential uses. Second, it shows just how widely misunderstood the technology is.
Is artificial intelligence just the stuff of science fiction, or can we create something that behaves and thinks in a way we would classify as “intelligent”? We might define intelligence as the ability to process information and retain it. But for something to be intelligent, it also has to understand that information.
Right now, we have artificial intelligence that can go through large amounts of data, but is it reasonable to expect these technologies to understand the data that they are processing? The answer depends on how you approach semantics and data.
Experts have long said that for artificial intelligence to be of any value to us, we need to feed it meaningful data. But what constitutes “meaningful data”?
- Data cohesion: information that is more or less consistent, so it can be processed and understood within a unified context.
- Data connectedness: information that carries clues and relationships, making it easier for algorithms to spot connections and trends.
- Data semantics: information whose meaning is interpretable, which often leads to actionable outcomes.
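To make these three properties a little more tangible, here is an illustrative sketch of what a single record exhibiting all of them might look like. Every field name, code, and identifier below is invented for the example:

```python
# Illustrative only: one patient record exhibiting the three properties.
# All field names, codes, and identifiers are invented for this sketch.

record = {
    # Cohesion: fields follow one consistent schema and unit convention.
    "patient_id": "P001",
    "systolic_mmHg": 120,
    # Connectedness: explicit links to related records and the source system.
    "links": {"provider": "clinic_a", "visit_id": "V-42"},
    # Semantics: a shared vocabulary code that makes the value interpretable.
    "measurement_code": "blood_pressure_systolic",
}
```

A record like this can be merged with records from other systems, traced back to its source, and interpreted by software that knows the shared vocabulary.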
Let’s look at all of these with an example. Say a patient visits different medical institutions and talks to different doctors and healthcare providers. Each provider records information about the same patient in its own system.
Meaningful data, in this instance, would provide a way to pull the individual pieces of information out of their various silos, bring them together, and connect them to the patient. Once you have identified the patient and collected all the relevant information, you will also need to standardize the formatting of the data.
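As a rough sketch of that unification step, the following Python example merges records from two hypothetical provider systems, renaming their provider-specific field names to one shared schema and grouping everything by a common patient identifier. All field names, provider names, and values are assumptions made for illustration:

```python
# Sketch: unify patient records from separate provider systems.
# Field names, provider names, and record shapes are hypothetical.

def normalize_record(record, field_map):
    """Rename provider-specific fields to a shared schema."""
    return {field_map.get(key, key): value for key, value in record.items()}

def unify(provider_records, field_maps):
    """Group normalized records from every provider by patient ID."""
    patients = {}
    for provider, records in provider_records.items():
        for rec in records:
            norm = normalize_record(rec, field_maps[provider])
            patients.setdefault(norm["patient_id"], []).append(norm)
    return patients

# Two providers storing the same kind of reading under different field names.
provider_records = {
    "clinic_a": [{"pid": "P001", "bp_systolic": 138}],
    "hospital_b": [{"patient": "P001", "systolic_mmHg": 122}],
}
field_maps = {
    "clinic_a": {"pid": "patient_id", "bp_systolic": "systolic"},
    "hospital_b": {"patient": "patient_id", "systolic_mmHg": "systolic"},
}

unified = unify(provider_records, field_maps)
# unified["P001"] now holds both readings under one shared schema.
```

Real systems would use an established standard for this rather than ad-hoc field maps, but the principle is the same: silos only become meaningful once their records share an identity and a schema.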
After that, you will need to “cleanse” the data and reconcile it. For instance, the patient will have had different blood pressure readings on different days. At one hospital, they might have had to climb a flight of stairs or two, making their blood pressure shoot up. At another, they might have been stressed. At yet another, they might have been more relaxed.
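A minimal sketch of that reconciliation step might look like the following, which separates blood pressure readings taken at rest from readings whose recorded context (stairs, stress) likely skewed them. The context labels and values are invented for the example:

```python
# Sketch: reconcile blood pressure readings using recorded context.
# Context labels and reading values are illustrative assumptions.

CONFOUNDING_CONTEXTS = {"climbed_stairs", "stressed"}

def reconcile(readings):
    """Split readings taken at rest from readings flagged as confounded."""
    at_rest, confounded = [], []
    for reading in readings:
        if reading["context"] in CONFOUNDING_CONTEXTS:
            confounded.append(reading)
        else:
            at_rest.append(reading)
    return at_rest, confounded

readings = [
    {"systolic": 150, "context": "climbed_stairs"},
    {"systolic": 135, "context": "stressed"},
    {"systolic": 120, "context": "at_rest"},
]

at_rest, confounded = reconcile(readings)
# at_rest keeps the single resting reading; confounded holds the other two.
```

The hard part, of course, is that the context usually is not recorded as a tidy label; it has to be inferred, which is exactly the kind of judgment the next paragraph describes.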
As you can see, even when you are dealing with just one person and the data trail they leave behind, you will encounter plenty of challenges before you can turn data into a fact we can all understand. That is not a problem for a human, who can sift through the records and sort them accordingly. When a person sees a higher blood pressure reading and notices that the doctor’s office is on the fourth floor of a building with no elevator, they can deduce that the higher reading might be due to the extra work of climbing the stairs.
Machines, however, have no such intuition. At a minimum, artificial intelligence needs to work with big data before its algorithms have the right context. IT teams developing artificial intelligence systems will need to ensure they have meaningful data first, put a unification strategy in place, and find ways to combine their data with data sourced from third-party companies.
On top of ever more data, you also need substantial computational resources, all while staying compliant with data privacy rules and regulations. You also need to make sure that every patient has the right to control how their personal data is used and what it is used for.
Photo courtesy of Mike MacKenzie.