The last few years have seen a flurry of news and marketing around generative AI, so much so that many organizations are now considering how AI tools might fit into their business.
The most notable has been ChatGPT, built on OpenAI's Large Language Models (LLMs), with multiple iterations released since its debut in November 2022. LLMs are "trained" in data centers full of GPUs, sifting through massive amounts of data to find the patterns that shape their neural networks. A trained model can use those patterns to generate content ranging from essays to code and converse with users in a far more human-like manner than any chatbot before it.
The Number of Generative AI Models Keeps Growing
ChatGPT and its offshoots power many of today's chatbots. Microsoft built on OpenAI's GPT models to develop Copilot, which is included in the latest version of Windows, with variations for other applications and services such as Teams and GitHub. Apple is working with OpenAI to integrate ChatGPT into its products, and Google has pursued its own model family with Gemini.
Other generative AI models, such as Midjourney and Stable Diffusion, produce images in a variety of styles from text prompts. Adobe has incorporated its own generative AI tool, Firefly, into Photoshop.
What Makes AI Models So Popular?
The success of these AI models, and their potential to assist with everyday tasks, has made them very popular, and companies of all kinds are looking for ways to leverage them. But that success is not without controversy. The large amounts of data needed to train models are pulled from publicly accessible internet sites, and some providers have leveraged their access to users' personal data on their own platforms to help develop their models.
AI models, much like the internet on which they were trained, can be convincingly wrong. Beyond ordinary errors, they are prone to "hallucinations," in which the model fabricates events and information that are completely fictional. On more than one occasion, lawyers have argued cases using case law references provided by ChatGPT, only to discover to their dismay that the cited cases never existed.
How to Avoid Business Data Collection and Exposure
Many free versions of AI assistants, including Microsoft's Copilot, further train their models on users' data and inputs. To avoid this data collection and exposure with Copilot, a paid license must be purchased from Microsoft. Midjourney is designed to be public by default, and users must explicitly flag inputs if they do not want them published publicly.
Consider Privacy Policy and Configuration When Using AI
It is important to consider an AI product's privacy policy and configuration when determining its use cases. Should it be used to splice employee photos into a themed company yearbook? To digest important business plans and data for internal documents or presentations? To assemble a marketing slick for a handout or website? Whatever the use case, never expose protected information such as trade secrets, sensitive code, or personally identifiable information (PII) to any AI tool.
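As a rough illustration of the kind of guardrail that can back up such a rule, the sketch below shows a minimal, hypothetical pre-filter that strips obvious PII patterns (email addresses, US phone numbers, SSN-style numbers) from text before it is ever sent to an external AI service. The function name and patterns are assumptions for illustration only, not part of any vendor's API, and a real deployment would need far more robust detection.

```python
import re

# Hypothetical pre-filter: strip obvious PII patterns before text leaves the company.
# These patterns are illustrative only; real PII detection requires a much more
# thorough approach (and policy controls beyond simple pattern matching).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matches of each PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize: contact Jane at jane.doe@example.com or 555-867-5309, SSN 123-45-6789."
    # Only the redacted version of the prompt would be forwarded to an external AI service.
    print(redact_pii(prompt))
```

A filter like this is only one layer; the policy and training discussed below still have to define what employees may enter into these tools in the first place.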
Companies such as Apple and Samsung banned the internal use of ChatGPT early on due to concerns about trade secret leaks; Samsung had sensitive code exposed when an employee fed it into ChatGPT. Data disclosures have also occurred when publicly accessible chatbots with access to sensitive internal data handed that data to outside parties simply upon request.
7 Questions to Ask Before Implementing AI in Your Business
Here are some questions to ask yourself if you are considering an AI tool or add-on for your organization:
- Is the tool permitted for use in your organization?
- Will you use the paid or free version?
- What is its data/privacy policy?
- What data will it have access to?
- What types of information can be entered into it?
- Can you verify the authenticity of its output?
- Does the product fit the use case well and provide value?
Ready to Use AI? Establish Clear Guidelines
The applications, as well as the risks, associated with these AI tools will become more apparent over time, especially as their capabilities evolve and features expand. Be deliberate in your implementation: establish a company policy on how these tools may be used, spell out specific guidelines, and communicate the policy to employees so they understand it before they begin interacting with AI.
Ready to Learn More?
We’re here to help organizations of all sizes keep their data secure and operations protected. If you have questions or want to learn more about implementing AI responsibly, reach out to our team today!