AI is today’s tech arms race, with organizations facing immense pressure from all sides to roll out AI solutions quickly. Business users are clamoring to use AI to drive everything from workforce productivity to new revenue streams, and data scientists are eager to be on the leading edge of AI innovation. Whether it’s creating intelligent bots to handle customer inquiries, quickly generating new code, predicting product demand, or advising employees in their everyday tasks, the potential applications of AI are endless.
In this hyper-competitive landscape, it’s tempting to give in to the “move fast and break things” mentality that has driven most technological innovation in the digital era. But is this the right approach to AI adoption? AI, including generative AI, holds immense promise, but it also poses inherent risks, from data privacy to security threats to regulatory noncompliance. Speed must be weighed against long-term impacts on IT infrastructure, security, and data. Business leaders want rapid time to value, but not at the risk of costly mistakes. As a recent McKinsey article highlighted, 63% of business respondents rated generative AI deployments as a high priority, yet 91% said they didn’t feel prepared to deploy it in a responsible manner.
A juggling act
To move from experimentation to successful AI deployment at scale—one that drives value while minimizing risks—the IT department must play a critical role from the earliest stages. As the bridge between data science, technology, and business teams, IT can align the interests of all three groups to help shepherd cutting-edge innovation into mature and stable programs. They can ensure that business and data science teams have the resources they need to accelerate innovation without overstretching existing systems or incurring huge costs. In other words, they can help AI move fast without breaking things!
To do this, IT teams need to juggle several key priorities:
Managing AI’s consumption of data. AI must be fed vast quantities of data, and data scientists will always want the freshest data they can get. As the organization’s guardian of data security and integrity, IT must ensure that AI models can access and consume the data they need while maintaining robust governance and avoiding impacts on critical IT systems. This will help prevent data loss or contamination while securing personal data and intellectual property. In addition, as those responsible for cybersecurity, IT teams need to take steps to protect not only data but also AI models from theft, corruption, and abuse.
Avoiding technical debt. With their holistic view of resources, IT teams can ensure AI doesn’t contribute to technical debt. To solve a specific problem, business users are often tempted to build a dedicated data pipeline to feed an individual AI model, creating more layers of technology and more data silos. The resulting shadow IT, often invisible to IT teams, exists outside the management and governance structures intended to control costs, underpin security, and minimize risk. These siloed pipelines also consume valuable resources for maintenance and may ultimately hold back the deployment of AI at scale.
Supporting AI deployment. IT teams must be involved in deployment and operationalization to ensure AI projects don’t disrupt core operations. As noted in the previous article in this series, AI will not work in isolation. To deliver value, it must be integrated into existing applications and business processes. IT should play a part in provisioning infrastructure, integrating applications, and ensuring security as new models are embedded into the business.
Managing SLAs. IT is uniquely positioned to create and manage the SLAs needed to ensure lasting value. IT teams need to ask how existing resources and planned investments can be leveraged to support AI, because the ongoing resource demands of running AI models are frequently overlooked. For example, how will data be accessed safely and securely without costly and time-consuming data moves?
Performing ongoing monitoring. AI is not like a “normal” application. Once models are integrated with existing (or new) business applications, both business users and IT will have to monitor them for drift and hallucinations and be ready to intervene to correct any issues.
The role of the data analytics platform
The most effective AI projects—those that scale to value without risking security, governance, or investment plans—will require tight collaboration among IT teams, business teams, and data scientists. But they will also require a tried and trusted data and analytics platform. Teradata VantageCloud, the complete cloud analytics and data platform for AI, provides the flexibility and scalability needed to enable the rapid rollout of AI projects while enforcing governance, security, and integrity. VantageCloud’s integrated ModelOps capabilities allow rapid innovation and efficient reuse of resources, enabling IT teams to satisfy the “need for speed” of business teams and data scientists while maintaining safe, secure, and robust operations.
With Teradata and the right approach, businesses can prove it’s possible to move fast—but not break things.