
The hitchhiker’s guide to Jupyter (part 2/n)

By Adrian James · 18 April 2024 · 6 min read

AI is coming for us!

It seems the press has become obsessed (yet again) with the existential threat posed to us by AI. At least that’s how it feels from my personal newsfeed, handed to me by AI recommendation engines on the back of my many searches for AI topics. Having recently spent a lot of time looking at AI, I feel I need to weigh in with a slightly more balanced viewpoint.

We don’t have AGI

Artificial General Intelligence (AGI) may someday threaten us like it does in The Terminator or The Matrix, but we are a long way from AGI. It is also debatable whether it would even see us as a threat. Far more likely, to my mind, is that many AGIs are developed in competition, with vastly different budgets. Some will see each other as a threat; many will just be rubbish and fall apart at the first hurdle. Again, it brings me back to thoughts of Douglas Adams and the Sirius Cybernetics Corporation: talking doors with Genuine People Personalities.

OK, so it doesn’t need to be an existential threat, right? The rapid increase in development and availability of Large Language Models (LLMs) and generative AI could create a productivity boost so enormous that many thousands of people find themselves out of a job. Let’s unpick that a little.

What is AI then?

AI is a bunch of differential calculus performed at scale on matrices of data, with weighted inputs and loss equations. Doesn’t sound quite so threatening or so interesting now, does it? How does that equate to the generative models we have today, I hear you ask? Think of it this way…
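To make that less abstract, here’s a toy sketch of my own (not anything from a real product): a single weighted input trained against a mean-squared-error loss by gradient descent, which is the ‘differential calculus on matrices of data’ boiled down to one weight.

```python
import numpy as np

# Toy version of "weighted inputs and loss equations":
# one linear weight learning y = 2x by gradient descent.
rng = np.random.default_rng(42)
x = rng.random(100)   # the "matrix of data" (here just a vector)
y = 2.0 * x           # the relationship we want the model to learn

w = 0.0               # a single weighted input
for _ in range(200):
    y_hat = w * x                        # prediction
    loss = np.mean((y_hat - y) ** 2)     # the loss equation (MSE)
    grad = np.mean(2 * (y_hat - y) * x)  # the differential calculus bit
    w -= 0.5 * grad                      # nudge the weight downhill

print(f"learned w = {w:.3f}, loss = {loss:.6f}")  # w ends up near 2.0
```

Scale that up to billions of weights and you have the guts of a modern model: still arithmetic, not a mind.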

When you prompt a generative visual model to draw you a picture of a face in the style of Rembrandt, it has a catalogue of Rembrandts. It has pulled them apart again and again and created a graph of all the similarities between them. It then uses trial and error, very, very quickly, to create something that fits that model.

[Image: a cartoon of a Rembrandt-style visual being drawn]

AI-generated text works the same way. It has been trained on massive amounts of text to create mathematical models of how the words or syllables relate to each other, and has done the same with semantics, proper nouns, and the like to capture the meaning. It’s amazing, but it’s not AGI. When it generates text, it draws on those learned relationships, using trial and error to generate something like the text it has seen before, but within the constraints you give it.
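Here’s a deliberately tiny sketch of that idea: a bigram model, nowhere near a real LLM, but the same statistical flavour. It counts which word follows which in its training text, then generates by sampling from those counts.

```python
from collections import Counter, defaultdict
import random

# A tiny "language model": count which word follows which (a bigram
# model). Real LLMs are vastly bigger, but the statistical idea is akin.
corpus = ("the ships hung in the sky in much the same way "
          "that bricks don't").split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

# Generate by trial and error: sample each next word from the counts.
word, output = "the", ["the"]
for _ in range(8):
    choices = follows.get(word)
    if not choices:
        break  # no known follower; a real model never runs dry
    word = random.choices(list(choices), weights=list(choices.values()))[0]
    output.append(word)

print(" ".join(output))  # e.g. "the same way that bricks don't"
```

A real LLM swaps word counts for billions of learned weights over far richer relationships, but it is still sampling from a statistical model, not consulting an inner genius.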

I think of these models as very efficient curators with very, very efficient indexes, bringing back answers from the body of information they hold almost instantaneously. They are not original in the sense of the ‘spark of creativity’, but they are very good at pulling disparate information together from wide sources, if that information exists. If it doesn’t, the model may ‘hallucinate’, and whether that approximates original thought can be debated another day; it’s not usually useful in a work context.

How can generative models take jobs?

I don’t think this is an AI question. That heading was more clickbait, sorry. If a person’s job is to take a request from someone, delegate it to someone else, check the result, add their name to it, and pass it back to the requestor, then my guess is a generative model could be used in much the same way. This is a question of value.

If we use AI to do lots of low-value tasks with increased frequency, we invite automation to replace our function in the value chain. This is normal. As a species we have always sought to increase efficiency in our work. Farming replaced hunting, mechanisation replaced human labour, computers replaced repetitive administrative tasks. You may see this as good, bad, or both, but in context, generative AI is another step on a well-trodden road.

How can we make our jobs AI-safe?

If I asked you, “Do you give 100% to every task?” would your honest answer be yes? When you are asked to produce a report do you extensively check every source, cite them all in references, and create a nuanced and balanced view of all the options available with their risks and assumptions made along the way? Almost certainly not every time.

We live in a time-poor era. Working days are long, pressures on our time are high, experienced help is hard to find.

Let’s ask a different question.

How can I use AI to improve my productivity?

Remember that large language models are trained on massive sets of data. Remember also that they are trained on different sets of data. You can use that to your advantage. When you ask an LLM for an answer, be the value add to the information you get back. You need to turn the information into knowledge. Your experience is a lens to apply to that information; you provide the context that gives it value.

Here are some tips for using LLMs to increase your productivity and the quality of the work you derive from them:

  • Use multiple LLM tools. You will get different answers from different models. Some are more up to date than others, they are tuned differently, and they vary from case to case in the quality they return. They are compound models, so the way they extract sentiment, semantics, context, and named entities will vary.
  • Ask the questions in different ways. Try being more specific about context, geography, how recent you want the sources to be, and which sources you want to include or exclude. Keep the questions within the chat context. This helps the chatbot you are engaged with tune its input parameters to the context of your discussion. If you start a new chat each time, the context is lost; remember, it’s just a statistical model in a container somewhere. If you walk away, it gets turned off. Your chat history is your way of keeping the context of the discussion. Try also to collate your discussion into a more precise question and start a new chat with it on the same or another model. Use it to cross-reference your answers (see the sketch after this list).
  • Ask for sources of information, and then ask the models to include those sources in the information you ask them to provide. This will help you cast a wider net over the available information.
  • Check the answers you get back. Responses can include ‘hallucinations’, shorthand for ‘the AI made it up’! Check that the references are correct, that facts and figures are correct, and that quotes are accurate and attributed to the right people.
  • Ask for references and citations, but don’t just blindly add them to your work. Check them for relevance and context, and know what’s in them. You may get asked about them! You can ask the language models to summarise the information in the references. If they can’t, they may not have had access to them at the point of training, and that could be a hallucination red flag.
  • Be respectful. Some of the information you are given may contain sensitive material, copyrighted work, or information that is private. The training data is huge, and mistakes are made.
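To make the first two tips concrete, here’s a minimal sketch in Python. Nothing in it is a real API; ask_model is a hypothetical placeholder for whichever provider SDK or HTTP client you actually use. The point is the shape: the chat history is the context, and a second model gives you something to cross-reference.

```python
# Sketch only: `ask_model` is a hypothetical stand-in for a real
# LLM client (a provider SDK, an HTTP API, a local model, etc.).
def ask_model(model: str, messages: list[dict]) -> str:
    """Placeholder; replace the body with a call to your provider."""
    return f"[{model} answering, given {len(messages)} message(s)]"

# The chat history *is* the context: send it back with every turn.
history = [{"role": "user",
            "content": "Summarise option A vs option B, with sources."}]
history.append({"role": "assistant",
                "content": ask_model("model-a", history)})

# A follow-up in the same history keeps the discussion's context.
history.append({"role": "user",
                "content": "Which of those sources are most recent?"})
follow_up = ask_model("model-a", history)

# Cross-reference: collate the discussion into one precise question
# and put it to a second model in a fresh chat.
collated = [{"role": "user",
             "content": "Compare option A and option B on cost and "
                        "risk, citing sources published since 2023."}]
second_opinion = ask_model("model-b", collated)

print(follow_up)
print(second_opinion)
```

Starting a fresh chat for the second model is deliberate: it stops the first model’s phrasing from steering the second one’s answer.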

Instead of thinking about LLMs as a threat to your job, see them as a threat to your backlog.

If you want to discuss the safe implementation of AI into your workplace, including generative models, get in touch with Methods. We can help with organisational change, technology design, implementation, and service management.

“Share and enjoy!”*

 

*Quote from Douglas Adams, The Hitchhiker’s Guide to the Galaxy