AI large language models will change and open up

 

“Open-source AI will continue to improve and be taken into widespread use. These models herald a democratisation of AI, shifting power away from a few closed companies and into the hands of humankind. A great deal of research and innovation will happen in that space in 2024. And whilst I don’t expect adherents in either camp of the safety debate to switch sides, the number of high-profile open-source proponents will likely grow.”

Andy Patel, Senior Researcher at WithSecure

 

“While Large Language Models (LLMs) can analyse enormous volumes of unstructured data, they are off-limits to all organisations apart from those with very deep pockets. Most companies have so far restricted their generative AI usage to single projects such as smart process automation or for increasing workforce productivity. But this may change if the models are decentralised and democratised so they can be made more affordable. Expect to see smaller models, which are less power hungry, less susceptible to the hallucinations common to LLMs and more accessible, come onto the market next year.”

Naren Narendran, chief scientist, Aerospike

How AI and LLMs interact will change

 

“In 2024, we’ll see the rapid sophistication of AIOps (the process of using big data and machine learning to automate IT operations), which will reveal a new world of interactions and capabilities for large language models. Instead of applications that serve as a simple pass-through to LLMs, I predict the proliferation of prompt engineering tools that automatically enrich users’ inputs and iterate with LLMs to derive better results. With more efficient AIOps, enterprises of all kinds will have access to scalable automation.”

Sean Scott, chief product development officer, PagerDuty
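To make that pattern concrete, the following is a minimal Python sketch of the kind of enrich-and-iterate loop described above. The `complete` and `acceptable` callables are hypothetical stand-ins rather than any specific vendor API: `complete` represents whatever chat-completion client an enterprise uses, and `acceptable` is whatever quality check it applies to the model’s output.

```python
from typing import Callable

def enrich_prompt(user_input: str, context: dict) -> str:
    """Wrap the raw user input with operational context before it reaches the LLM."""
    facts = "\n".join(f"- {key}: {value}" for key, value in context.items())
    return (
        "You are an IT operations assistant.\n"
        f"Known operational context:\n{facts}\n\n"
        f"User request: {user_input}\n"
        "Answer concisely and state which context items you relied on."
    )

def iterate_with_llm(
    user_input: str,
    context: dict,
    complete: Callable[[str], str],      # hypothetical chat-completion client
    acceptable: Callable[[str], bool],   # hypothetical quality check
    max_rounds: int = 3,
) -> str:
    """Re-prompt with feedback until the answer passes the quality check or rounds run out."""
    prompt = enrich_prompt(user_input, context)
    answer = complete(prompt)
    for _ in range(max_rounds - 1):
        if acceptable(answer):
            break
        prompt += f"\n\nThe previous draft was rejected:\n{answer}\nPlease revise it."
        answer = complete(prompt)
    return answer
```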


A focus on hallucinations

 

“In 2023, “hallucinations” by large language models were cited as a major barrier to adoption. If generative AI models can simply invent facts, how can they be trusted in enterprise settings? I predict, however, that several technical advances will all but eliminate hallucinations as an issue. One such innovation is Retrieval Augmented Generation (RAG), which primes the large language model with true, contextually relevant information right before prompting it with a user query. This technique, still in its infancy, has been shown to dramatically decrease hallucinations and is already making waves.”

Adrien Treuille, director of product management and head of Streamlit, Snowflake
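As an illustration of the RAG pattern Treuille refers to, the sketch below retrieves relevant documents first and then primes the model with them before the user’s question. The keyword-overlap retrieval and the `complete` callable are deliberate simplifications; production systems would use vector search and a real model client.

```python
from typing import Callable

# Toy document store; a real system would index these with embeddings.
DOCUMENTS = [
    "Invoice #4417 was paid on 2024-01-12.",
    "The returns policy allows refunds within 30 days of delivery.",
    "Support hours are 09:00-17:00 CET, Monday to Friday.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval standing in for vector search."""
    terms = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(terms & set(d.lower().split())), reverse=True)
    return ranked[:k]

def rag_answer(query: str, complete: Callable[[str], str]) -> str:
    """Prime the model with retrieved facts right before the user query."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    prompt = (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return complete(prompt)
```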

 

“It wouldn’t be surprising if the tech space remembers 2023 as the ‘year of the Large Language Model (LLM)’. Whether this persists into 2024 remains to be seen. It’s likely that many businesses currently scrambling to incorporate LLMs into their offering will realise that LLMs can only get us so far without resolving their hallucination problem with neural-symbolic methods.”

Janet Adams, COO, SingularityNET

Code will advance with AI

 

“The evolution of AI in DevSecOps will transform code testing over the next couple of years. Currently, 50% of all testing is conducted with the help of AI. Expect this to reach 80% by the end of 2024, approaching 100% automation within two years. As organisations integrate these AI tools into their workflows, they will grapple with the challenges of aligning their current processes with the efficiency and scalability offered by AI. This shift promises a radical increase in productivity and accuracy — but it also demands significant adjustments to traditional testing roles and practices.”

David DeSanto, chief product officer at GitLab
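As a rough illustration of the AI-assisted testing DeSanto predicts, the sketch below asks a model for pytest cases covering a small function and leaves the result for a human to review before it enters the suite. The `complete` callable is again a hypothetical stand-in for a code-generation model client.

```python
import inspect
from typing import Callable

def slugify(text: str) -> str:
    """Example function under test: lowercase, trim, and hyphenate."""
    return "-".join(text.strip().lower().split())

def generate_tests(func, complete: Callable[[str], str]) -> str:
    """Ask the model for pytest cases covering the function's source."""
    prompt = (
        "Write pytest unit tests for the following Python function. "
        "Cover normal input, empty input, and surrounding whitespace.\n\n"
        f"{inspect.getsource(func)}"
    )
    return complete(prompt)

# Usage (illustrative): write the draft tests to the suite for human review.
# tests = generate_tests(slugify, complete=my_llm_client)
# pathlib.Path("tests/test_slugify.py").write_text(tests)
```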


 

But AI-generated code will prove faulty

 

“In 2024, more organisations will experience major digital service outages due to poor quality and insufficiently supervised software code.

Developers will increasingly use generative AI-powered autonomous agents to write code for them, exposing their organizations to increased risks of unexpected problems that affect customer and user experiences. This is because the challenge of maintaining autonomous agent-generated code is similar to preserving code created by developers who have left an organization. None of the remaining team members fully understand the code. Therefore, no one can quickly resolve problems in the code when they arise. Also, those who attempt to use generative AI to review and resolve issues in the code created by autonomous agents will find themselves with a recursive problem, as they will still lack the fundamental knowledge and understanding needed to manage it effectively.”

Bernd Greifeneder, founder and CTO, Dynatrace

 

“As AI-powered code creation becomes increasingly adopted by organisations within their software development practices, the risk of significant AI-introduced vulnerabilities and intellectual property loss emerging in the next year is high. The situation will only worsen unless privacy and intellectual property (IP) protections are prioritized around how AI-powered code creation is adopted and deployed. The industry stands at a critical juncture: Without a concerted effort to integrate robust privacy measures, IP leakage will persist and intensify, leading to potentially widespread repercussions for software security, corporate confidentiality, and customer data protection.”

David DeSanto, chief product officer, GitLab


 

“Watch out for how the hyperscalers use AI to enable developers to accelerate development cycles by the automated generation of code. This is even more powerful than the content generation illustrated by LLMs to date. The use cases, such as generating the code to visualise a dashboard by using generative AI to interpret a screen grab, are compelling.

“They come with risk, so enterprises will need to get better at managing the ethical and controls implications of AI, by grasping the risk, regulatory and governance effects upfront.”

Adrian Bradley, head of cloud transformation, KPMG
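A hedged sketch of the screen-grab use case Bradley mentions: send a dashboard screenshot to a vision-capable model and ask for code that recreates it. `complete_with_image` is a hypothetical stand-in for whichever multimodal API is in use, and the output should be treated as a draft subject to the governance checks he describes.

```python
from pathlib import Path
from typing import Callable

def dashboard_code_from_screenshot(
    image_path: str,
    complete_with_image: Callable[[str, bytes], str],  # hypothetical multimodal client
) -> str:
    """Ask a vision-capable model to recreate a dashboard from a screenshot."""
    prompt = (
        "This screenshot shows a metrics dashboard. Generate Python code using "
        "matplotlib that recreates the same panels from a pandas DataFrame named "
        "`df`. Return code only."
    )
    image = Path(image_path).read_bytes()
    # The generated code is a draft: review and test it before running it
    # anywhere that matters.
    return complete_with_image(prompt, image)
```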

 

“The biggest challenge to AI adoption is hype: both too much and too little hype. On the over-hyping side, there is a serious risk that developers will expect tools like Amazon CodeWhisperer to do their work for them, resulting in bugs or other problems. If you’re expecting a code generation assistant like CodeWhisperer to produce perfect code that you can accept and use wholesale without change, you’re going to be disappointed. AI should be used to augment a developer’s abilities, not to replace them. Which means that a developer needs to bring their expertise to the AI coding party: they’ll need to be able to work out where AI gets things wrong, and know how to prompt it to fix things (or do it themselves).”

Matt Asay, vice president of developer relations, MongoDB
