Making AI Productive pt 2 – An intro to having AI handle your office grunt work
How language models, chains, and agents structure new software workflows
💡What’s New: How to make language models do your office work.
🤔 Opinion: I am suffering from model fatigue.
🛠️ Tools & Data: Opensource frameworks to make your AI productive.
💡What’s new
(This is the second part of the series on becoming productive with AI that began last week. In part one, we discussed the different types of AI “models” based on the kind of information they process and what they do.)
Today we focus on Large Language Models (LLMs) like ChatGPT. How do you get these to do useful work?
First, let’s dispel any wishful thinking: forget about having AI do all your work. Setting up the AI is a challenge for beginners. Like operating a complex piece of machinery, you have to learn a few things, starting with “prompt engineering”.
Further, a lot of effort and cost goes into getting these models to perform, and even more into making them efficient. IMO, most junior miners, and even business units in larger mining enterprises, should approach building custom AI tools with caution.
However, if you want to eliminate wasted time and money, you can automate a lot of back-office work.
Start with off-the-shelf solutions. You’ll do better and go farther at first. You can use either ChatGPT or opensource (for which I’d recommend Together.ai or ollama.ai).
First, I use LLMs for simple tasks (a minimal code example follows this list).
Use LLMs to draft letters that handle legal disputes (see opinion section).
Use LLMs to learn new skills and topics.
Use LLMs to brainstorm ideas (or argue a point to get clarity).
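To make the first item concrete, here is a minimal sketch of drafting a dispute letter through the OpenAI API. It assumes the openai Python package and an API key; the model name, invoice details, and prompt wording are purely illustrative, and the same pattern works against hosted or local open-source models.

```python
# Minimal sketch: draft a dispute letter with a hosted LLM.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Draft a firm but polite letter to a supplier disputing invoice #1042. "
    "The invoice double-bills freight charges. Ask for a corrected invoice "
    "within 14 days and keep the letter under 300 words."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat model works
    messages=[
        {"role": "system", "content": "You are a concise business correspondent."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)
```

Only the prompt changes for the other simple tasks: ask the model to teach you a topic, or to argue the other side of a decision you are weighing.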
Next, I use LLMs to perform complex tasks (sketched in code after this list).
Get LLMs to behave like an expert.
Get LLMs to screen junk phone calls and answer emails.
Get LLMs to pull elements I am looking for out of raw text.
Get LLMs to manage entries on my calendar.
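Here is a rough sketch of two of the items above, playing the expert and pulling structured fields out of raw text, again assuming the openai package; the persona, field names, and example note are all made up for illustration.

```python
# Sketch: expert persona + structured extraction from raw text.
# The fields and example note below are illustrative only.
import json
from openai import OpenAI

client = OpenAI()

raw_text = """
Met with the drilling contractor on 12 March. Agreed day rate of $8,500,
mobilization by 2 April, target depth 450 m at the Kitwe prospect.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[
        {
            "role": "system",
            "content": (
                "You are an experienced mining project manager. "
                "Extract the requested fields and reply with JSON only."
            ),
        },
        {
            "role": "user",
            "content": (
                "From the note below, extract: contractor_rate_usd, "
                "mobilization_date, target_depth_m, site_name.\n\n" + raw_text
            ),
        },
    ],
    response_format={"type": "json_object"},  # ask for a JSON-only reply
)

fields = json.loads(response.choices[0].message.content)
print(fields)
```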
Once I had mastered the basics, I started giving LLMs access to software tools (a tool-calling sketch follows these items).
LLMs can interact with / use existing software and APIs.
LLMs can write and execute new code to perform some new function.
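The usual mechanism behind this is tool calling: you describe a function to the model, it replies with the arguments it wants to pass, and your code runs the call. A minimal sketch, assuming the openai package; add_calendar_event is a hypothetical stand-in for whatever calendar or business API you actually use.

```python
# Sketch: let the LLM call an existing function (tool calling).
# `add_calendar_event` is a hypothetical placeholder, not a real API.
import json
from openai import OpenAI

client = OpenAI()

def add_calendar_event(title: str, date: str) -> str:
    # Placeholder: in practice this would hit your calendar's API.
    return f"Created '{title}' on {date}"

tools = [{
    "type": "function",
    "function": {
        "name": "add_calendar_event",
        "description": "Add an event to the user's calendar.",
        "parameters": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "date": {"type": "string", "description": "YYYY-MM-DD"},
            },
            "required": ["title", "date"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{"role": "user", "content": "Book the assay lab review for 2024-06-03."}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:  # the model asked us to run the function
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    print(add_calendar_event(**args))
```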
The advanced uses for LLMs today are chains and agents.
Chains are structured LLM workflows: the model is run through a predefined sequence of tasks (a hand-rolled sketch follows this list).
LLM chains can perform web searches and summarize the data.
LLM chains can draft, test, and post new code to my GitHub.
LLM chains can manage my AWS compute resources automatically.
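Frameworks such as the ones listed under Tools & Data below handle this for you, but the idea is simple enough to sketch by hand: each step's output feeds the next step's prompt, in a fixed order. In the sketch below, web_search is a hypothetical helper you would back with a real search API; the chain structure is the point.

```python
# Sketch of a two-step chain: search the web, then summarize the results.
# `web_search` is a hypothetical helper; the LLM call uses the `openai`
# package as in the earlier examples.
from openai import OpenAI

client = OpenAI()

def web_search(query: str) -> str:
    """Hypothetical: return concatenated snippets from a search API."""
    raise NotImplementedError("plug in your search provider here")

def summarize(text: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{
            "role": "user",
            "content": f"Summarize the following to answer '{question}':\n\n{text}",
        }],
    )
    return response.choices[0].message.content

def search_and_summarize(question: str) -> str:
    # The "chain": step 1's output becomes step 2's input, in a fixed order.
    snippets = web_search(question)
    return summarize(snippets, question)
```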
Agents are unstructured LLM workflows that let the LLM figure out how to fulfill your request; a bare-bones agent loop is sketched after this list. (This is the area I’m currently building in.)
LLM agents can handle complex web research tasks for you.
LLM agents can help you write good questions.
LLM agents can serve as your personal assistant (e.g., designing your new website).
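Where a chain fixes the order of steps, an agent lets the model choose its own next step in a loop: request a tool, observe the result, repeat until it can answer. Below is a bare-bones sketch of that loop, assuming the same openai tool-calling setup as above; run_tool is a hypothetical dispatcher, and real frameworks (AutoGen, Baby AGI, see Tools & Data) add planning, memory, and multi-agent conversation on top.

```python
# Bare-bones agent loop: the model decides which tool to call, we execute it,
# feed the result back, and repeat until it answers in plain text.
# `run_tool` is a hypothetical dispatcher for whatever tools you expose.
import json
from openai import OpenAI

client = OpenAI()

TOOLS = [{  # one illustrative tool; add more as needed
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return result snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def run_tool(name: str, args: dict) -> str:
    # Hypothetical: route to your actual tool implementations.
    raise NotImplementedError(name)

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative
            messages=messages,
            tools=TOOLS,
        )
        message = response.choices[0].message
        if not message.tool_calls:   # no tool requested: this is the final answer
            return message.content
        messages.append(message)     # keep the model's tool request in context
        for call in message.tool_calls:
            result = run_tool(call.function.name, json.loads(call.function.arguments))
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": result,
            })
    return "Stopped after max_steps without a final answer."
```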
Using these tools effectively requires a new mindset. The old way of working was to download a branded package of widgets, provide all the inputs yourself, and manage each step of the workflow for maximum control over the result. The new way of working is to prompt the LLM and let the models > chains > agents figure out what to do and shape the output for you.
Welcome to the future!
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Related News
A new foundation model from Jua for earth sciences and physics.
Zambia begins its own aerial geological survey for minerals.
KoBold Metals delivers! – a huge copper deposit discovered in Zambia.
Check out a brand-new set of YouTube tutorials on resource characterization in South Australia.
URSA Space launches iron ore stockpile tracking.
Maybe you can expect a mining boom like this in your town.
🤔 Opinion
I am tired of being marketed the latest AI foundation models. Are you?
Each week I wade through hundreds of posts pushing the latest startup’s model and how it’s going to beat NVIDIA or Google. I saw another just this morning.
Five years ago, when I started developing at BMW, hardly any CTOs were investing in AI the way they are today. Now that the money dam has burst, it seems like everyone has a model. Chasing the green, I get it. But there’s a problem.
Most newly minted AI researchers and companies have little industry experience. You can see it in the job postings – calls for a “recently graduated PhD.” Seriously? You want to build industry-specific AI tools with people who have zero experience in whatever industry you’re targeting?
I find most new products are things that sound useful but are unusable, products in search of a problem. The builders often don’t consider whether the market will use it, or how. In fact, they usually don’t know the market, the practitioners.
That brings us to the problems with most of these AI models today, including opensource. Industry has almost 0 experience with them. Compute infrastructure is a challenge. Performance is often finicky. And usability? Hahahahaha.
From my experience and conversations with professionals, most will not take the time to learn new tools while money is being made elsewhere. That’s why I think there are few actual results to show for all the AI hype.
Many execs, including those at MSFT, are still wondering how to make their models profitable. (Smaller models, maybe?) And despite the claims of opensource leading the race, I have yet to see opensource versions as effective as paid competitors.
My solution to the problem: Stop building…long enough to go talk to the market. Get to know them and find out what they need. Then go build.
Is this too obvious?
🛠️ Tools and Data
Frameworks for building and using your own custom AI agents:
Baby AGI: https://github.com/yoheinakajima/babyagi – an AI-powered task management system that uses the OpenAI and Pinecone APIs to create, prioritize, and execute tasks.
Autogen: https://github.com/microsoft/autogen – MSFT’s framework for developing LLM applications with multiple agents that converse with each other to solve tasks.
Ollama: https://github.com/ollama/ollama – get up and running with large language models locally.
LM Studio: https://github.com/lmstudio-ai – run LLMs (Hugging Face models, etc.) on your laptop, entirely offline, in a chat UI or as a local server.
Thanks for reading! Want me to look into a particular topic? Email your suggestions and I will dig in.