Generative AI relies on the collection and analysis of vast amounts of data, which raises privacy concerns. Businesses should be transparent about how they collect and use customer data, and should give users the option to opt out of data collection. Getty Images, known for its historical and stock photos, has sued the AI image generation company Stability AI, the maker of Stable Diffusion, for copyright infringement. Getty alleges that the company copied over 12 million of its images to train its AI model ‘without permission or compensation’.
Unlike traditional AI systems that rely on predefined rules, generative AI models learn from vast amounts of data to generate content. AI is a field of computer science focused on creating apparently intelligent machines capable of performing tasks that have usually required human intelligence. It involves developing algorithms and systems that can learn from data, recognise patterns, and make decisions or predictions based on what they have learnt. Beyond existing data protection laws, government has no oversight over how data entered into web-based generative AI tools is then used. You should therefore not put information into generative AI tools that, if compromised or lost, could have damaging consequences for individuals, groups of individuals, an organisation or for government more generally.
In addition, sector-specific frameworks for governance and oversight can affect what ‘responsible’ AI use and governance means in certain contexts. Laws that apply to specific types of technology, such as facial recognition software, online recommender technology or autonomous driving systems, will also impact how AI should be deployed and governed in respect of those technologies. Before using generative AI in business processes, organisations should consider whether generative AI is the appropriate tool for the relevant task. Factors such as cost will also have a role to play here, with the cost of generative-AI-based searches currently far exceeding the cost of using, for instance, internet search engines. Through machine learning, artificial intelligence models can ‘learn’ from data patterns without human direction.
The table below indicates the main types of generative AI application and provides examples of each. Generative AI is a broad concept that can in theory be approached using a variety of different technologies. In recent years, though, the focus has been on neural networks: computer systems designed to imitate the structure of the brain. The framework was published shortly after the Cabinet Office produced guidance for civil servants, saying they should be open to using generative AI but also cautious about the information they enter.
Inma Martinez has enjoyed a decorated career, fostering a reputation as an expert in digital technology, machine learning and artificial intelligence. Named as one of the Top 20 Women Changing The Landscape of Data, Inma’s reputation as a digital pioneer has also seen her become a member of the Expert Group at the Global Partnership on Artificial Intelligence. Remember that training the AI is largely a human activity: if the human training the AI to recognise, say, cats has a blind spot (if, perhaps, they regard a cat without a tail as not a real cat), then the AI will not recognise Manx cats.
Some commentators have blamed at least some of this on the company’s, and particularly Zuckerberg’s, focus on its leap into the metaverse, a concept that has not yet been enthusiastically adopted by the public. Efforts are being made to develop technologies to detect and prevent deepfakes, but their effectiveness remains limited as the technology continues to evolve rapidly. In recent months there have been a number of instances in which deepfakes have been created using generative AI. You can also manually watch for clues that a text is AI-generated, for example a very different style from the writer’s usual voice or a generic, overly polite tone. A chatbot like ChatGPT, for instance, generally has a good idea of what word should come next in a sentence because it has been trained on billions of sentences and has “learnt” what words are likely to appear, in what order, in each context.
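The next-word idea can be illustrated with a toy bigram counter. This is a deliberately simplified sketch with an invented miniature corpus, not how any production chatbot works: real systems use neural networks trained on billions of sentences, but the underlying intuition of "which word usually follows this one?" is the same.

```python
from collections import Counter, defaultdict

# Toy training corpus; real chatbots learn from billions of sentences.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows each word in the training text (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the word seen most often after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("sat"))    # → on ("sat" is followed by "on" twice)
print(most_likely_next("zebra"))  # → None (never seen in training)
```

The same limitation the article describes applies here too: the model can only reproduce patterns present in its training data, so a word it has never seen yields no prediction at all.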
A prolific businessman and investor, and the founder of several large companies in Israel, the USA and the UAE, Yakov heads a corporation comprising over 2,000 employees around the world. He graduated from the University of Oxford in the UK and the Technion in Israel, before moving on to study complex systems science at NECSI in the USA. Yakov has a Master’s in Software Development.
Work continues to develop our understanding, including DfE work on AI in education and NCSC articles on the subject. This guidance also covers the above and other forms of generative AI, including systems such as DALL-E, which generates images based on text, and BLOOM, which generates computer code. ChatGPT and Google’s Bard are publicly available web-based versions of generative AI that allow users to enter text and seek a view from the system, or to ask the system to create textual output on a given subject. They allow individuals to summarise long articles, get an answer of a specific length to a question, or have code written for a described function. If your module tutor has specifically permitted you to use generative AI for your assessment, you should follow their guidance on ensuring that this use is acknowledged and cited correctly.
Furthermore, AI can produce unique and appealing product descriptions, improving the site’s SEO and increasing sales conversions. Using data about customer behaviour and preferences, AI can generate personalised email marketing campaigns, product recommendations, and customer service responses. Based on historical data, generative AI models can predict financial trends and market behaviour. The AI can generate reports and insights, offering business leaders a significant edge in decision-making, which leads to more strategic investments and cost savings. Generative AI can simulate the expertise of financial advisors, delivering tailored advice to customers based on their unique financial situations. AI can generate tailored investment strategies or retirement plans by synthesising vast amounts of financial data and client information.
But it’s a similar concept: providing a public-facing chatbot to assist in search results, with the ability to create entire near-perfect documents, articles, code, images, videos, music and audio in seconds rather than hours. Whether this latest iteration of AI applications will be the end of us as a species is a topic for another time. But with the sudden rush to adopt this new technology into our lives and businesses, many have been caught unaware of its history, uses, benefits and risks.
Right now, there’s simply a lack of policies, policy enforcement and monitoring on the subject, and CISOs and security practitioners need to work together to put appropriate safeguards in place. For instance, too often we read that people have input sensitive information about their company or customers into an AI tool without checking whether it’s the right thing to do. The AI system can’t warn the user about this, but it can learn from the input and could inadvertently retain something you don’t want it to. Remember, with no guarantee on how LLMs learn, we cannot be 100% certain we aren’t sharing trade secrets with the machine.
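One basic safeguard of the kind described above is redacting obviously sensitive strings before a prompt ever leaves the organisation. The sketch below is a minimal illustration using two assumed, hypothetical patterns (email addresses and phone-like digit runs); real data loss prevention tooling is far more sophisticated and pattern matching alone will miss plenty of sensitive content.

```python
import re

# Hypothetical patterns for illustration only; real DLP tools use
# far more robust detection than simple regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d(?:[\s-]?\d){9,10}\b"),  # 10-11 digit runs
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings with placeholder labels."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise the complaint from jane.doe@example.com, tel 020 7946 0958."
print(redact(prompt))
# → Summarise the complaint from [EMAIL], tel [PHONE].
```

A filter like this would sit between users and the web-based tool, so that whatever the LLM retains from the input, it is not the raw customer detail.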
They similarly underpin many image generation tools such as Midjourney or Adobe Photoshop’s generative fill tools. In this explainer we use the term ‘foundation models’ – which are also known as ‘general-purpose AI’ or ‘GPAI’. We have chosen to use ‘foundation models’ as the core term to describe these technologies. We use the term ‘GPAI’ in quoted material, and where it’s necessary in the context of a particular explanation.