With Generative AI, AI Becomes Bigger and Better

August 14, 2023

As one of the first break-out examples of applied Generative AI, ChatGPT grew from nothing to 100 million users in less than 2 months. Not even TikTok grew that quickly (9 months). In the public consciousness, AI has gone from a nebulous staple of sci-fi movies to a hands-on productivity tool in less than a summer vacation season. Everyone seems to be typing prompts to instruct generative AI, from business executives getting market research to high school students doing book reports. 

But AI has been around a long time, so what makes generative AI so special, yet so available? Well – according to ChatGPT itself, its generative AI is: 

  • Designed to generate human-like text, coherent and contextually relevant, based on the input it receives.  
  • Pre-trained on vast amounts of text data, which allows it to learn language patterns and nuances from diverse sources. This pre-training makes it effective at understanding and generating text. 
  • Highly effective for tasks involving language translation and text generation. 

Clear enough? 

We brought in Greg Hodgkinson, Prolifics CTO, to further explain this latest evolution of AI and what it may mean for us all. 

We’ve seen AI work at businesses for a long time, with things like analyzing data to spot trends, finding problems, and rooting out inefficiencies. What was different then?

Before generative AI, if you wanted to use an AI model, you would need a lot of data and a huge amount of time to train that AI model to do a very specific thing, based on your data. It’s all been very doable, and we’ve been doing that for decades. So, for a traditional AI project, we had to: 

  • gather the data 
  • make sure that the data was good enough quality to train with  
  • get the data into the format required to do the model training 
  • do the actual training of the model 
  • see how successful the model was at doing what you wanted it to do  
  • repeat, if necessary, that whole process until it gave good outputs 
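The steps above can be sketched as a toy loop in pure Python. This is purely illustrative: the data is hardcoded and the "model" is just a learned threshold standing in for real training, but the shape of the workflow – gather, clean, train, evaluate, repeat – is the same one real projects followed.

```python
# Toy illustration of the traditional AI workflow: gather data,
# clean it, train a model, evaluate it, repeat until good enough.
# The "model" here is just a learned threshold -- a stand-in for
# the expensive training step in a real project.

def gather_data():
    # Step 1: gather labeled data (hypothetical toy records:
    # feature value -> label, with one bad record to clean out).
    return [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1), (None, 1)]

def clean(data):
    # Step 2: quality-check -- drop records with missing features.
    return [(x, y) for x, y in data if x is not None]

def train(data):
    # Steps 3-4: "train" by placing the decision threshold midway
    # between the mean feature value of each class.
    zeros = [x for x, y in data if y == 0]
    ones = [x for x, y in data if y == 1]
    return (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2

def evaluate(threshold, data):
    # Step 5: measure how often the trained threshold is right.
    correct = sum(1 for x, y in data if (x > threshold) == (y == 1))
    return correct / len(data)

# Step 6: repeat until it gives good outputs (one pass suffices here).
data = clean(gather_data())
threshold = train(data)
accuracy = evaluate(threshold, data)
print(threshold, accuracy)
```

Even in this toy form, most of the code is about the data, not the model – which is exactly why traditional AI projects were so expensive.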

And how does this differ with the new type of AI?

Generative AI starts with what is called a “foundation model.” Instead of training your own model, you can take a foundation model off the shelf – and it’s a massive model. Models in the past, even those trained on pretty big data sets, would only do one or two specific jobs. Today, you can take this massive off-the-shelf model that’s been trained to handle seemingly anything you throw at it – and you don’t have to do any of that training yourself. You might fine-tune it, but it’s essentially ready off the shelf. 

Think of the first time you used ChatGPT to do something meaningful for you. You didn’t have to lift a finger – you provided no data, you did nothing but prompt it with some instructions. The reason it’s so capable out of the box is that somebody else has already done all the training of the foundation model, on massive data sets. That’s what makes it so “knowledgeable” and so capable, and why it’s ready to do meaningful things for you. 
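That “just prompt it” interaction can be sketched in a few lines, assuming an OpenAI-style chat API. The model name and message format below follow that common convention but are illustrative, and the sketch only builds the request payload rather than sending it, so no account or API key is needed:

```python
import json

# Sketch of a prompt-only interaction with a foundation model.
# No data gathering, no training -- just plain-text instructions.
# "gpt-4o-mini" and the role/content message shape mirror the
# OpenAI-style chat convention; treat both as illustrative.

def build_chat_request(prompt, model="gpt-4o-mini"):
    """Package a user prompt into a chat-completion request body."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

request = build_chat_request("Summarize this short story into a few lines: ...")
print(json.dumps(request, indent=2))
```

Note what is absent: there is no training data anywhere in the request – the prompt itself is the entire input.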

Here’s a silly example. If you type, “What’s 5 plus 3?” into ChatGPT, it’ll give you the right answer. But that’s not because someone guessed you’d want to use it as a calculator and programmed one in. It’s because somewhere in the body of knowledge it was trained on, there’s elementary mathematics. And not just mathematics – it has the foundational knowledge for a seemingly limitless variety of tasks that were previously trickier to automate. Like “write me a poem,” or “summarize this short story into a few lines,” or “explain quantum mechanics to me like I’m an 8-year-old,” or “give me ideas for a birthday party for someone who hates birthday parties.” 

How will these foundation models affect businesses?

If you wanted to make automation smarter with “traditional” AI, the cost and time it took to create a model were significant and required expensive resources. That made the ROI harder to justify – and it became a barrier to many good ideas being implemented. Now those limits have largely disappeared, because foundation models are so much more capable, accessible, and immediately available. You can implement your use case at a fraction of the cost, often with far superior output.  

With a massively capable, general-purpose foundation model, it may be good enough to handle your project right off the bat. But you could also supplement it with a little more of your own data – that’s called fine-tuning a foundation model. The result would most likely be just as capable as a model you had trained yourself from scratch. So the barrier to entry for AI is now much lower – almost nonexistent. 
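In practice, the “little bit more data” for fine-tuning is often a handful of your own prompt/completion examples. A minimal sketch of preparing that data in the JSONL format many fine-tuning services accept – the example tickets and the `prompt`/`completion` field names are illustrative, so check your provider’s documentation for the exact schema:

```python
import json

# Sketch: turn a handful of domain examples into JSONL fine-tuning
# records (one JSON object per line). The field names mirror a
# common fine-tuning convention; real schemas vary by provider.

examples = [
    ("Classify the ticket: 'My invoice is wrong'", "billing"),
    ("Classify the ticket: 'The app crashes on login'", "technical"),
]

def to_jsonl(pairs):
    """Serialize (prompt, completion) pairs as one JSON object per line."""
    return "\n".join(
        json.dumps({"prompt": p, "completion": c}) for p, c in pairs
    )

jsonl = to_jsonl(examples)
print(jsonl)
```

Compare this to the traditional workflow: a few dozen examples to nudge an existing model, versus a full data-gathering and training cycle to build one from scratch.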

There’s always talk of winners when a new technology comes out. What do you see in this case?

To me, there are probably three types of winners. The most exciting are the people who can pick up these AI capabilities and add value on top of them – the “creators.” Another kind are the businesses that will leverage this technology to reduce their costs and deliver better customer service. And the third are the people in our industry who will be indispensable in making this happen – enabling businesses to operationalize the models and platforms that allow them to create their own value. 

Do you want to be one of the “exciting” kind? Figure out how it could make you more productive. How do you take your abilities and your skills and then add value on top of generative AI? Because there are still going to be limits to what it can do. So, figure out how you can add value – and increase your ability to do more than you did previously. Use it to research. Use it to ideate. Use it to create. Use it to sharpen up what you’ve created. But don’t ignore it, because you may miss out on the biggest development since the dawn of the Internet. 

 

Greg Hodgkinson is Prolifics’ Chief Technology Officer and Worldwide Head of Engineering, and an IBM Lifetime Champion. As a technology leader, he’s responsible for innovative cross-practice solutions for our customers, creating a foundation for innovation in the company, and driving improvements in the art of software development and delivery throughout Prolifics.

 

Related blog posts: 

Modernize Your Customer And Employee Experience With Conversational And Generative AI | Prolifics US 

Navigating The Integration Maze: Challenges In Incorporating LLM’s Into Your Application | Prolifics US 

 

About Prolifics

At Prolifics, the work we do with our clients matters. Whether it’s literally keeping the lights on for thousands of families, improving access to medical care, helping prevent worldwide fraud or protecting the integrity and speed of supply chains, innovation and automation are significant parts of our culture.  While our competitors are throwing more bodies at a project, we are applying automation to manage costs, reduce errors and deliver your results faster. Let’s accelerate your transformation journeys throughout the digital environment – Data & AI, Integration & Applications, Business Automation, DevXOps, Test Automation, and Cybersecurity. We treat our digital deliverables like a customized product – using agile practices to deliver immediate and ongoing increases in value.  Visit prolifics.com to learn more.