Alright, my friends, I'm back with another post based on my learnings and exploration of AI and how it will fit into our work as network engineers. In today's post, I want to share the first (of what will likely be many) "nerd knobs" that I think we all should be aware of, and how it can affect our use of AI and AI tools. I can already sense the excitement in the room. After all, there's not much a network engineer likes more than tweaking a nerd knob in the network to fine-tune performance. And that's exactly what we'll be doing here: fine-tuning our AI tools to help us be more effective.
First up, the requisite disclaimer or two.
- There are SO MANY nerd knobs in AI. (Shocker, I know.) So, if you all like this kind of blog post, I'd be happy to come back in other posts where we look at other "knobs" and settings in AI and how they work. Well, I'd be happy to come back once I understand them, at least. 🙂
- Changing any of the settings in your AI tools can have dramatic effects on the results. This includes increasing the resource consumption of the AI model, as well as increasing hallucinations and decreasing the accuracy of the information that comes back from your prompts. Consider yourselves warned. As with all things AI, go forth and explore and experiment. But do so in a safe, lab environment.
For today's experiment, I'm once again using LM Studio running locally on my laptop rather than a public or cloud-hosted AI model. For more details on why I like LM Studio, check out my last blog, Creating a NetAI Playground for Agentic AI Experimentation.
Enough of the setup, let's get into it!
The impact of working memory size, a.k.a. "context"
Let me set a scene for you.
You're in the middle of troubleshooting a network issue. Someone reported, or noticed, instability at a point in your network, and you've been assigned the happy task of getting to the bottom of it. You captured some logs and relevant debug information, and the time has come to go through it all to figure out what it means. But you've also been using AI tools to be more productive, 10x your work, impress your boss, you know, all the things that are going on right now.
So, you decide to see if AI can help you work through the data faster and get to the root of the issue.
You fire up your local AI assistant. (Yes, local, because who knows what's in those debug messages? Best to keep it all safe on your laptop.)
You tell it what you're up to, and paste in the log messages.

After getting 120 or so lines of logs into the chat, you hit enter, kick up your feet, reach for your Arnold Palmer for a refreshing drink, and wait for the AI magic to happen. But before you can take a sip of that iced tea and lemonade goodness, you see this has suddenly popped up on the screen:


Oh my.
"The AI has nothing to say."!?! How could that be?
Did you find a question so difficult that AI can't handle it?
No, that's not the problem. Take a look at the helpful error message that LM Studio kicked back:
"Trying to keep the first 4994 tokens when context overflows. However, the model is loaded with context length of only 4096 tokens, which is not enough. Try to load the model with a larger context length, or provide shorter input."
And we've gotten to the root of this perfectly scripted storyline and demonstration. Every AI tool out there has a limit to how much "working memory" it has. The technical term for this working memory is "context length." If you try to send more data to an AI tool than can fit into the context length, you'll hit this error, or something like it.
The error message indicates that the model was "loaded with context length of only 4096 tokens." What's a "token," you wonder? Answering that could be a topic for an entirely different blog post, but for now, just know that "tokens" are the unit of measurement for the context length. And the very first thing that happens when you send a prompt to an AI tool is that the prompt is converted into "tokens".
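Exact token counts depend on the tokenizer each model uses, but a rough rule of thumb (around four characters of English text per token) is enough to sanity-check whether a log file will fit before you paste it in. Here's a minimal back-of-the-napkin sketch of that idea; the four-characters-per-token ratio and the log file name are just assumptions for illustration, not anything specific to LM Studio or this model.

```python
# Rough token estimate for a log file before pasting it into a chat.
# Assumption: ~4 characters per token for English-ish text; real tokenizers vary.
CHARS_PER_TOKEN = 4

def estimate_tokens(text: str) -> int:
    """Very rough estimate; use the model's own tokenizer for exact counts."""
    return max(1, len(text) // CHARS_PER_TOKEN)

with open("switch-debug.log", "r", encoding="utf-8", errors="replace") as f:
    logs = f.read()

estimated = estimate_tokens(logs)
context_length = 4096  # the context length the model was loaded with

print(f"~{estimated} tokens of logs vs. a {context_length}-token context window")
if estimated > context_length:
    print("This will likely overflow the context: raise the context length or trim the logs.")
```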
So what do we do? Well, the message gives us two possible options: we can increase the context length of the model, or we can provide shorter input. Sometimes it isn't a big deal to provide shorter input. But other times, like when we're dealing with large log files, that option isn't practical: all the data is important.
Time to turn the knob!
It's that first option, loading the model with a larger context length, that's our nerd knob. Let's turn it.
From within LM Studio, head over to "My Models" and click to open the configuration settings interface for the model.


You'll get a chance to view all the knobs that AI models have. And as I mentioned, there are a lot of them.


But the one we care about right now is the Context Length. We can see that the default length for this model is 4096 tokens, but it supports up to 8192 tokens. Let's max it out!


LM Studio provides a helpful warning and a likely reason why the model doesn't default to the max: the context length takes memory and resources, and raising it to "a high value" can impact performance and usage. So if this model had a max length of 40,960 tokens (the Qwen3 model I sometimes use has a max that high), you might not want to just max it out right away. Instead, increase it a bit at a time to find the sweet spot: a context length big enough for the job, but not oversized.
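To see why this knob costs memory, it helps to know that the model keeps a key/value cache entry for every token in the window, so the cache grows linearly with context length. The sketch below runs that rough sizing math; the layer count, head count, head dimension, and cache precision are made-up example numbers (not the specs of any particular model), and real usage also depends on quantization and runtime overhead, so treat this strictly as a ballpark.

```python
# Ballpark KV-cache memory for a given context length.
# All model dimensions below are illustrative assumptions, not a specific model's specs.
n_layers = 32        # transformer layers (assumed)
n_kv_heads = 8       # key/value heads (assumed)
head_dim = 128       # dimension per head (assumed)
bytes_per_elem = 2   # fp16/bf16 cache entries

def kv_cache_bytes(context_length: int) -> int:
    # 2 = one key tensor + one value tensor, per layer, per token
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * context_length

for ctx in (4096, 8192, 40960):
    print(f"context {ctx:>6}: ~{kv_cache_bytes(ctx) / 1024**2:.0f} MiB of KV cache")
```

With these example numbers, doubling the context from 4096 to 8192 tokens doubles the cache from roughly 512 MiB to 1 GiB, which is exactly why creeping the knob up gradually beats maxing it out on day one.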
As network engineers, we're used to fine-tuning knobs for timers, frame sizes, and so many other things. This is right up our alley!
Once you've updated your context length, you'll need to "Eject" and "Reload" the model for the setting to take effect. But once that's done, it's time to enjoy the change we've made!
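As an aside, you don't have to do all of this through the chat window. LM Studio can also run a local server that speaks an OpenAI-compatible API (by default at http://localhost:1234/v1), so once the model is reloaded with the bigger context you could feed it the logs from a script. Here's a minimal sketch assuming that server is enabled; the model name, API key string, and log file name are placeholders you'd swap for your own.

```python
# Minimal sketch: send logs to a model served by LM Studio's local,
# OpenAI-compatible endpoint (assumes the server is running on the default port).
from openai import OpenAI

# The API key is not checked by the local server; any string works.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

with open("switch-debug.log", "r", encoding="utf-8", errors="replace") as f:
    logs = f.read()

response = client.chat.completions.create(
    model="local-model",  # placeholder: use the identifier shown in LM Studio
    messages=[
        {"role": "system", "content": "You are helping a network engineer troubleshoot."},
        {"role": "user", "content": "Summarize these logs and flag likely root causes:\n\n" + logs},
    ],
)
print(response.choices[0].message.content)
```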


And look at that: with the larger context window, the AI assistant was able to go through the logs and give us a nice write-up about what they show.
I particularly like the shade it threw my way: "…consider seeking assistance from … a qualified network engineer." Well played, AI. Well played.
But bruised ego aside, we can continue the AI-assisted troubleshooting with something like this.


And we're off to the races. We've been able to leverage our AI assistant to:
- Process a large amount of log and debug data to identify possible issues
- Develop a timeline of the problem (which will be super helpful in the help desk ticket and root cause analysis documents)
- Identify some next steps we can take in our troubleshooting efforts.
All stories must end…
And there you have it, our first AI Nerd Knob: Context Length. Let's review what we learned:
- AI models have a "working memory" that's called "context length."
- Context length is measured in "tokens."
- Often an AI model will support a larger context length than its default setting.
- Increasing the context length will require more resources, so make changes gradually; don't just max it out completely.
Now, depending on what AI tool you're using, you may NOT be able to adjust the context length. If you're using a public AI like ChatGPT, Gemini, or Claude, the context length will depend on the subscription and models you have access to. However, there most definitely IS a context length that factors into how much "working memory" the AI tool has. And being aware of that fact, and its impact on how you can use AI, is important. Even if the knob in question is behind lock and key. 🙂
If you enjoyed this look under the hood of AI and want to learn about more options, please let me know in the comments: Do you have a favorite "knob" you like to turn? Share it with all of us. Until next time!
PS… If you'd like to learn more about using LM Studio, my friend Jason Belk put together a free tutorial called Run Your Own LLM Locally For Free and with Ease that can get you started very quickly. Check it out!