What is AI and where does it fit?

Aug 14, 2021 | Artificial intelligence

Digital transformation unlocked a tidal wave of new data. It promised new insights, more relevant services, and more efficient communication. However, the reality of managing ‘big data’ has been quite different – until artificial intelligence (AI) arrived.

In media and marketing, that reality has been riddled with challenges: storage and analysis simply haven’t matched up to the promise. There was far too much data for humans to process – or for traditional computing, which is largely human-operated too. The data’s there – it’s just not acted upon.

Enter AI. It helps organize and analyze that vast amount of data. It delivers outputs and enables decision-making at both scale and speed. It opens new opportunities beyond the reach of human skill, traditional computing algorithms, and even human comprehension. AI is the key that unlocks the future ‘big data’ promised.

And the application of AI in mainstream business is in its infancy. It’s finding its feet, and that’s nothing compared to what will come during your career. Its capabilities are rapidly evolving, and it will change not only our world at work but also the careers of people across our industry. And even though its application is still young, AI is already embedded in many business operations, with new uses found daily.

What is AI?

AI is a computer program that mimics some aspects of human intelligence. It can learn from examples and experiences, recognize objects, understand and respond to language, make decisions, and solve problems. This means it can come up with a solution without a human telling it how to get there.

So what’s the difference between traditional programming and AI?

It’s a paradigm shift in computing. Traditional programming powers your laptop, provides the source code for your web pages, runs your mobile apps, and is behind every software package you use. Every text message sent, email opened, online shopping product added to your cart – traditional programming is there.

It works by understanding the output users want (or the question that needs answering) and has a structure for inputting available data. It uses ‘conditional statements’ (such as ‘if this happens / then do that’), pattern matching, and has branches that process different parts of a task. Programmers codify these rules, writing sets of step-by-step instructions as part of the algorithms that form the software. These instructions dictate how to process defined types of data inputs to deliver desired outputs. That could be for something as simple as when you opened this webpage or as complex as calculating the lunar trajectory of the Apollo moon landings.
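
To make that concrete, here’s a minimal, hypothetical sketch of rule-based logic in Python – the function and its keyword rules are ours, invented purely for illustration:

```python
# Traditional, rule-based programming: every rule is written in advance
# by a programmer as explicit 'if this happens / then do that' statements.

def classify_message(text: str) -> str:
    """Hand-coded rules: the programmer must anticipate every case."""
    lowered = text.lower()
    if "unsubscribe" in lowered or "limited offer" in lowered:
        return "promotion"
    if "invoice" in lowered or "payment due" in lowered:
        return "billing"
    if "meeting" in lowered:
        return "work"
    return "other"  # anything the programmer didn't anticipate falls through

print(classify_message("Your payment due date is Friday"))  # -> billing
print(classify_message("Fancy lunch tomorrow?"))            # -> other: no rule covers it
```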

In AI, the inputs and desired outputs are determined by people, but developing the algorithm is left to the machine. As humans, we might supervise its development to nudge it in the right direction, but we’re not writing those parts of the code. We may not even be giving it all of the inputs. The AI has to create these parts itself. Traditional programming still plays a part – providing the framework and user interface – but the AI sitting within it is the hero.
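
By contrast, here’s a hedged sketch of the AI approach, assuming the scikit-learn library (the toy messages and labels are invented for illustration). We supply example inputs and desired outputs; the mapping between them is learned, not hand-coded:

```python
# The AI approach: humans provide examples (inputs) and labels (desired
# outputs), and the machine derives the decision logic itself.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "limited offer, click to unsubscribe",
    "your invoice and payment due date",
    "meeting moved to 3pm",
    "exclusive offer just for you",
]
labels = ["promotion", "billing", "work", "promotion"]

# No one writes a rule linking 'invoice' to 'billing' - the model infers it.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(messages, labels)

print(model.predict(["payment reminder: invoice attached"]))  # likely ['billing']
```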

What’s the same between traditional programming and AI?

They’re both software, they both automate actions at scale, and they both use algorithms – they just go about it very differently. And that means they differ greatly in the scope of problems they can solve.

In traditional programming, to deliver the right output, the software needs to cover all of the steps, complete all the computations, be told how to deal with every possible type of input, and have all of this in place from the outset. And that’s the limitation: every option needs to be planned for. These are massive decision trees, with every pathway pre-defined.

The AI algorithms are more flexible. They’re created by ‘trial and error’ learning. They evolve over time to deliver more accurate results and/or to take different inputs. They’re improving, not simply processing – and because of this, the code behind them is more difficult for humans to understand.
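
Here’s a minimal sketch of that trial-and-error loop in plain Python (the toy data and learning rate are invented for illustration). The program starts with a wrong guess and repeatedly nudges it toward whatever shrinks the error – improving, rather than simply processing:

```python
# 'Trial and error' learning: start wrong, measure the error, adjust,
# repeat. Nobody hand-codes the final rule; it emerges from the loop.

data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # inputs x with desired outputs y = 2x
weight = 0.0                             # initial guess for the rule y = weight * x

for step in range(50):
    # Error correction: nudge the weight in the direction that shrinks the error.
    gradient = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    weight -= 0.01 * gradient
    if step % 10 == 0:
        error = sum((weight * x - y) ** 2 for x, y in data) / len(data)
        print(f"step {step}: guess {weight:.3f}, error {error:.4f}")

print(round(weight, 2))  # converges toward 2.0 - a rule nobody wrote by hand
```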

When do you use each?

  • Traditional algorithms work well when the inputs are known, the relationships between the data and the computational steps are understood, and it’s feasible for a human to codify the logic – quantified data, clearly defined variables, everything within what the programmer anticipated.
  • AI approaches work well when there are many variables or unknown relationships, or the data is too messy or unclear to cover every possibility with pre-determined algorithms. That’s why AI copes well with so many real-world scenarios, with all the diversity of people’s behavior and the complexity of human systems (see the sketch after this list).
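
As a sketch of that second bullet, take handwritten digits: each 8x8 image has 64 pixel variables with no relationship a human could feasibly codify by hand. Assuming the scikit-learn library (its bundled digits dataset is used purely for illustration), a learned model handles it in a few lines:

```python
# A problem suited to AI: handwritten digits. With 64 pixel variables and
# no hand-codable relationship between them and the answer, pre-written
# rules are impractical - but a model can learn the mapping from examples.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 labelled 8x8 images, flattened to 64 values
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0
)

model = LogisticRegression(max_iter=5000)  # generous iteration cap so it converges
model.fit(X_train, y_train)
print(f"accuracy: {model.score(X_test, y_test):.0%}")  # typically around 95%
```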

General AI

Artificial General Intelligence (AGI), also called Strong AI, is AI that more fully replicates the autonomy of the human brain. A self-driving car recognizing traffic signs, assessing risk, and adjusting to changing road conditions to efficiently and safely navigate to the destination would be in this category.

Narrow AI

Narrow AI or Artificial Narrow Intelligence (ANI) – also called Weak AI – is AI trained and focused on performing specific tasks.

Super AI

Some also use the term Super AI for AI that far surpasses human intelligence. A computer system that achieved artificial superintelligence would be able to outperform humans in almost every field, including scientific creativity, general wisdom, and social skills – perhaps even becoming self-conscious. Unlike movie villains such as ‘HAL’ and ‘The Terminator’, today’s AI is at the stage of Narrow or at most General AI – and it requires a lot of human work to be set up, tested, and managed.

Key enabler: data storage and computing power

All this data needs to be stored and processed. And processing vast amounts of data requires powerful, specialized computers. The falling cost of data storage and processing power, combined with increases in the performance of computer chips, makes AI applications possible and affordable. We’ve crossed a tipping point, which is why you’re seeing AI go mainstream and increasingly being offered as SaaS (software as a service) solutions from third parties.

Data: why it’s the new oil

Ever heard data described as the new oil? We love that analogy, and it’s not just about value. Like oil, data is usually messy, difficult to store, impossible to use in its raw form, and only valuable once it’s refined. In any dataset, there’s a lot of clean-up needed before you start – it’s not simply a matter of connecting to an API.
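
As a hypothetical example of that refining step, assuming the pandas library (the tiny dataset below is invented – real ones are far bigger and messier):

```python
# 'Refining' raw data with pandas: drop unusable rows, coerce values into
# consistent, analyzable types, then deduplicate.
import pandas as pd

raw = pd.DataFrame({
    "customer": [" Ann", "ann", "Bob", None],  # stray spaces, mixed case, missing
    "spend": ["100", "100", "eighty", "250"],  # numbers stored as text
})

refined = (
    raw.dropna(subset=["customer"])            # drop rows missing a key field
       .assign(
           customer=lambda d: d["customer"].str.strip().str.title(),   # ' Ann'/'ann' -> 'Ann'
           spend=lambda d: pd.to_numeric(d["spend"], errors="coerce"), # 'eighty' -> NaN
       )
       .drop_duplicates()                      # now the two Ann rows match
)
print(refined)
```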

Training your AI

AI needs data to train on. The good news is that there’s now more data available than ever, and the digitization of business and society means the volume is exploding. In fact, the problem now is that there’s so much data it becomes impossible to act on (without applying AI!). In 2025, the global datasphere could reach about 175 zettabytes – that’s 175 trillion gigabytes.
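
If you want to sanity-check that headline number: a zettabyte is 10^21 bytes and a gigabyte is 10^9 bytes, so the conversion is a one-liner (plain Python):

```python
# 175 zettabytes expressed in gigabytes: 1 ZB = 10**21 bytes, 1 GB = 10**9 bytes.
zettabytes = 175 * 10**21   # total bytes
print(zettabytes / 10**9)   # 1.75e+14, i.e. 175 trillion gigabytes
```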

Interesting fact: The rise of NVIDIA

Like every tech revolution, AI creates the potential for rapid growth among the companies that empower it. Perhaps you know NVIDIA? The company was originally famous for its powerful GPUs (Graphics Processing Units – computer chips specializing in processing graphical information) and the graphics cards used by gamers and video editors. It turned out that GPUs work well for AI applications, and they became an essential part of modern artificial intelligence infrastructure – making NVIDIA an unexpected leader in AI. And that’s why its stock price defied gravity for years.
