
Thoughtworks reports rapid growth in AI tools for software developers

AI tools and techniques are rapidly expanding in software as organizations look to streamline large language models for practical applications, according to a recent report from technology consultancy Thoughtworks. However, improper use of these tools can still pose challenges for businesses.

In the company’s latest Technology Radar, 40% of the 105 tools, techniques, platforms, languages and frameworks identified as “interesting” were related to AI.

Sarah Taraporewalla leads Thoughtworks Australia’s Enterprise Modernisation, Platforms, and Cloud (EMPC) practice. In an exclusive interview with TechRepublic, she explained that AI tools and techniques are proving themselves beyond the AI hype that exists in the market.

Sarah Taraporewalla, Director of Enterprise Modernization, Platforms and Cloud, Thoughtworks Australia.

“To get on the Technology Radar, our own teams need to be using it so we can form an opinion on whether it will be effective or not,” she explained. “What we’re seeing around the world across all our projects is that about 40% of the items we’re talking about come from work that’s actually happening.”

New AI tools and techniques are quickly put into production

Thoughtworks’ Technology Radar is designed to track “interesting things” that the consultancy’s global Technology Advisory Board has discovered happening in the global software engineering space. The report also assigns them a rating that tells technology buyers whether they should “adopt,” “trial,” “assess,” or “hold” these tools or techniques.

According to the report:

  • Adopt: “Blips” that companies should strongly consider.
  • Trial: Tools or techniques that Thoughtworks believes are ready to use, but not as proven as those in the Adopt category.
  • Assess: Things worth looking at closely, but not necessarily trialing yet.
  • Hold: Proceed with caution.

The report gave retrieval-augmented generation (RAG) “adopt” status as “the preferred pattern for our teams to improve the quality of responses generated by a large language model.” Meanwhile, techniques such as “using LLM as a judge” – where one LLM is used to evaluate the responses of another LLM, which requires careful design and calibration – were given “trial” status.
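The RAG pattern the report describes can be sketched in a few lines: retrieve relevant context first, then fold it into the prompt sent to the model. The keyword-overlap retriever and the prompt wording below are illustrative assumptions for a toy example; a production system would use a vector store and a real LLM client.

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# 1) retrieve the most relevant document(s) for a query,
# 2) build a prompt that grounds the LLM in that context.
# Retrieval here is naive keyword overlap, purely for illustration.

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by how many query terms they share, highest first."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Augment the user's question with the retrieved context."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

docs = [
    "The Technology Radar rates tools as adopt, trial, assess or hold.",
    "Pair programming pairs two developers at one workstation.",
]
prompt = build_rag_prompt("What ratings does the Technology Radar use?", docs)
```

The augmented prompt, rather than the bare question, is what gets sent to the model – which is what lets the LLM answer from the organization’s own documents instead of its training data alone.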

While AI agents are new, GCP Vertex AI Agent Builder, which lets organizations build AI agents using a natural-language or code-first approach, also earned “trial” status.

Taraporewalla said tools or techniques must already be in production to be recommended for “trial” status, meaning they represent success in real-world use cases.

“So when we talk about this Cambrian explosion in AI tools and techniques, we’re actually seeing it within our teams themselves,” she said. “In APAC, that’s representative of what we’re seeing from customers, in terms of their expectations and how willing they are to cut through the hype and look at the reality of these tools and techniques.”

SEE: Will the availability of power derail the AI revolution? (TechRepublic Premium)

The rapid adoption of AI tools is creating anti-patterns

According to the report, the rapid adoption of AI tools is starting to create anti-patterns – bad patterns across the industry that lead to poor outcomes for organizations. In the case of coding assistants, one important anti-pattern has emerged: overreliance on the code suggestions these tools produce.

“One anti-pattern we see is the reliance on the answer that is spit out,” Taraporewalla said. “So while a co-pilot will help us generate the code, if you don’t have those expert skills and the human in the loop to evaluate the response that comes out, we run the risk of overloading our systems.”

The Technology Radar highlighted concerns about code quality in the generated code and the rapid growth of codebases. “Code quality issues in particular highlight an area of ongoing commitment from developers and architects to ensure they are not drowning in ‘working, but terrible’ code,” the report said.

The report issued a “hold” on replacing pair programming practices with AI, with Thoughtworks noting that the rating aims to ensure AI helps rather than cluttering codebases with complexity.

“Something we strongly advocate for is clean code, clean design and testing, which reduces the total cost of ownership of the code base; where we rely too much on the answers the tools provide… it won’t help support the longevity of the code base,” Taraporewalla warned.

She added: “Teams just need to double down on the good engineering practices we always talk about – things like unit testing, fitness functions from an architectural perspective and validation techniques – to make sure it’s the right code that comes out.”
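An architectural fitness function of the kind Taraporewalla mentions can be as simple as an automated check that fails the build when code – generated or hand-written – breaks a layering rule. The module names and the “domain must not import infrastructure” rule below are invented for illustration, not Thoughtworks’ actual practice.

```python
# Sketch of an architectural fitness function: scan module sources for
# imports that violate a declared layering rule, so the check can run
# in CI alongside ordinary unit tests.

FORBIDDEN = {"domain": ["infrastructure"]}  # domain code must not import infrastructure

def layering_violations(sources: dict[str, str]) -> list[str]:
    """Return a '<module> imports <layer>' entry for each broken rule."""
    violations = []
    for module, code in sources.items():
        layer = module.split(".")[0]
        for banned in FORBIDDEN.get(layer, []):
            for line in code.splitlines():
                stripped = line.strip()
                if stripped.startswith((f"import {banned}", f"from {banned}")):
                    violations.append(f"{module} imports {banned}")
    return violations

# Toy in-memory "codebase"; a real check would read files from disk.
sources = {
    "domain.orders": "from infrastructure.db import Session\n",
    "infrastructure.db": "import sqlite3\n",
}
```

Run as a test, an empty result means the architecture rule holds; any entries fail the build and force a human back into the loop.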

How can organizations deal with changes in the AI toolscape?

It’s critical that organizations focus on the problem first, rather than the technology solution, to use the right tools and techniques without getting carried away by the hype.

“The advice we often give is figure out what problem you’re trying to solve and then figure out what might be around it from a solution or tool perspective to help you solve that problem,” Taraporewalla said.

AI governance will also need to be an ongoing process. Organizations can benefit from building a team that can help define their AI governance standards, train employees, and continuously monitor changes in the AI ecosystem and regulations.

“Having a group and a team committed to this is a great way to scale this across the organization,” Taraporewalla said. “So you’re making sure the guardrails are in place correctly, but you’re also allowing teams to experiment and see how they can use these tools.”

Companies can also build AI platforms with integrated governance features.

“You could codify your policies into an MLOps platform and have that as a base layer for the teams to build on,” Taraporewalla added. “That way you’ve narrowed down the experiments and you know which parts of that platform need to evolve and change over time.”
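Codifying policy into a platform base layer, as Taraporewalla describes, might look like a validation step every model deployment must pass before the platform accepts it. The policy fields below (evaluation threshold, data regions, human sign-off) are invented for illustration and not taken from any specific MLOps platform.

```python
# Hedged sketch of governance policy codified as a platform check:
# a deployment config is validated against organizational guardrails
# before the platform will ship the model.

POLICY = {
    "min_eval_score": 0.8,            # models must clear an evaluation bar
    "allowed_data_regions": {"eu", "us"},
    "require_human_review": True,     # the "human in the loop" guardrail
}

def check_deployment(config: dict) -> list[str]:
    """Return a list of policy violations; an empty list means compliant."""
    problems = []
    if config.get("eval_score", 0.0) < POLICY["min_eval_score"]:
        problems.append("evaluation score below policy threshold")
    if config.get("data_region") not in POLICY["allowed_data_regions"]:
        problems.append("data region not approved")
    if POLICY["require_human_review"] and not config.get("human_reviewed"):
        problems.append("missing human review sign-off")
    return problems
```

Because the check lives in the platform rather than in each team’s pipeline, teams can experiment freely while the guardrails evolve in one place – the “base layer” the quote describes.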

Experimenting with AI tools and techniques could be rewarding

Organizations experimenting with AI tools and techniques may have to change what they use, but according to Thoughtworks, they will also expand their platform and capabilities over time.

“I think when it comes to return on investment… if we have the testing mindset, we’re not just using these tools to do our job, but also looking at the elements that we’ll continue to build on our platform as we move forward, as our base,” Taraporewalla said.

She noted that this approach could allow organizations to get more value from AI experiments over time.

“I think the return on investment will pay off in the long run – if they can continue to look at it from the perspective of: what parts are we going to bring into a more common platform, and what are we learning from a foundational perspective that we can turn into a positive flywheel?”