
How I use LLMs in my day-to-day: The good, the bad, the ugly

Introduction

There is a lot of buzz around LLMs right now (and rightfully so). LLM capabilities are advancing rapidly, and they get better with each new model release. For some of us, LLMs have become like mobile phones: a technology we use every day for different purposes.

I wanted to write this post because I feel that some people may be skeptical about using LLMs, since they've heard LLMs can sometimes produce nonsense. This is true; LLMs do produce nonsense at times, but more often than not the positives outweigh the negatives.

The target audience of this post is people who have heard of LLMs but haven't really used them yet, besides maybe trying them out a few times via a web browser. In this post, I will try my best to enumerate all the ways I use LLMs in my day-to-day, as well as to point out what LLMs can't do yet at a sufficient level of quality (at least from my perspective). Most of these use cases involve me interacting with a web browser, but some are specific to programming and relate to Cursor. I won't go into great technical detail or explain why LLMs are good or bad at something; I'll leave that for another blog post.

I divided the blog post into three major sections: the good, the bad, and the ugly. They cover, respectively, use cases where LLMs shine, use cases LLMs are still bad at, and use cases that are just inconvenient.

I hope that by the end of this blog post you will be convinced to give LLMs a shot (or another shot). Let’s begin!

The good

Proofreading emails and messages

Note: Below I talk about emails, but the same applies to messages. Usually, emails are more formal so I tend to proofread them more often.

Without LLMs, I sent important or lengthy emails roughly like this:

  1. Write the first draft of the email.
  2. Do some other task.
  3. Re-read my draft from step 1 and revise it.
  4. Send the email.

I found this method very useful when sending important emails with technical details, where I wanted to make sure my point got across well (without bogging down the recipient(s) in unnecessary technical details if there's no need for that).

With LLMs, my workflow looks like this:

  1. Write the first draft of the email (sometimes LLMs assist me with this as well).
  2. Send the email to an LLM and ask it whether I can get my point across more concisely.
  3. Read what the LLM suggests.
  4. Send the email.

In this way, the LLM acts like a second person who proofreads my email and suggests improvements. I don't have to re-read my emails myself; instead, I have a “second set of eyes” that reads the email and suggests improvements (if any), and I can send it right away.

The prompt I use is something along the lines of:

I am composing an email which is trying to get the following points across:

* Point 1
* Point 2
* Point 3

Here is some additional context: <Context>

Is the email below good? Could it be improved in some way?

"""
<Email goes here>
"""

Sometimes, I even ask the LLM to help me draft an email:

I am composing an email which is trying to get the following points across:

* Point 1
* Point 2
* Point 3

Here is some additional context: <Context>

Could you please draft an email which would get these points across?

Inside <Context>, I describe any additional background (if needed).

Double-checking ideas

Sometimes I use LLMs to double-check some ideas I have.

As a concrete example, here is a prompt I used to double-check my website reorganization ideas:

What do you think about this organization? It doesn't seem right to me... Btw. Blog will just be Blog for now; no additional categories.

"""
Main Navigation:
* Home
* Portfolio (your work)
* Blog (your thoughts/writings)
* Resources (learning materials)
* About
* Contact

Under Portfolio:
* Professional Experience
* Side Projects
* Publications (formal research papers)

Under Blog:
* Machine Learning
* LLMs & GenAI
* Computer Vision
* Career Insights
* Technical Deep Dives

Under Resources:
* Tutorials
* Book Exercise Solutions
* Podcasts
* Certifications

Under About:
* About Me
* Education
* CV/Resume Download
"""

As you can see, it's almost as if I were talking to a human. The LLM then goes on to propose improvements (if any) to my ideas. I noticed that ChatGPT 4o tends to keep proposing improvements even after 5 or 6 iterations, while Claude Opus 4 tends to tell you the result is excellent after a single iteration (provided you implemented its suggestions).

Doing research I’d spend an afternoon (or a few afternoons) on

Here I found ChatGPT’s Deep Research very useful for doing research I’d spend an afternoon (or a few afternoons) on. I should note that just recently Anthropic introduced Research functionality to their web interface, so maybe it’s as good as or even better than ChatGPT’s Deep Research.

In any case, I usually ask the LLM to look into things that are on my backlog but that I haven't gotten around to yet. An example would be the following:

I am interested in AI Safety research. I do not have a PhD in computer science, but I do have a master's degree and 4-5 years of experience in AI/ML. I am attaching my CV.

Please look into some research camps (or similar) which would allow me to participate in AI Safety research and would allow me to test how well I would fare at it. I am particularly interested in the camps which I could attend remotely (and ideally part-time).

Usually the original question is followed by multiple clarifying questions from the LLM, and after that it gets to work.

I found the quality of the results to vary: sometimes it gives me genuinely good results, and sometimes the results aren't really relevant to my circumstances. Either way, I end up with a better idea of the topic I'm researching and at least some pointers I can start with.

Brainstorming possible solutions

In these examples I’ll use software engineering, but I think this could apply to any field of work (and maybe even to general life problems or situations).

Before LLMs, when I faced a new task that I was unfamiliar with, I did the following:

  1. If available, I would ask someone more senior than me for pointers on how to do what I wanted/needed to do.
  2. If no senior was available, I’d Google around to find ideas for a solution.
  3. I’d implement whatever I found through either step 1 or step 2 (usually I combined them).

Now, with LLMs, my workflow goes like this:

  1. I articulate the task I’m trying to solve and ask an LLM for ideas.
  2. The LLM gives me some ideas on how to approach it.
  3. I usually ask follow-up questions and double-check some things to make sure the LLM didn't hallucinate.
  4. I pick a solution and proceed to implement it.

One concrete example: for one of my clients, I was tasked with writing a local LLM and ASR server in C++ using a specific NVIDIA SDK. There were multiple things to consider here:

  • I hadn't written C++ in some time (my focus is primarily Python)
  • The rest of the team used Python
  • The server architecture was something I’d have to consider

In order to brainstorm possible solutions, I asked an LLM (Claude in particular) something along the lines of:

I am tasked with writing a local LLM and ASR inference server with NVIDIA XYZ SDK. The SDK uses C++.

Here are some things on my mind:

* I haven't written C++ in a few years now, so I'd prefer Python if possible
* Other team members use Python
* I am brainstorming possible server architectures; do you have any suggestions?

And then I brainstormed with the LLM. It proved to be really useful and suggested a solution that worked great for everyone on the team.
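Purely for illustration (this is not the actual design we landed on, and all names and endpoints below are made up), one direction such a brainstorm can surface is keeping the C++ SDK behind a small local inference server and letting the Python-speaking team call it over HTTP:

# Hypothetical Python client for a local C++ inference server (illustrative sketch only).
import requests

ASR_SERVER_URL = "http://localhost:8080"  # made-up address of the local C++ service


def transcribe(audio_path: str) -> str:
    """Send an audio file to the (hypothetical) /transcribe endpoint and return the text."""
    with open(audio_path, "rb") as audio_file:
        response = requests.post(
            f"{ASR_SERVER_URL}/transcribe",
            files={"audio": audio_file},
            timeout=60,
        )
    response.raise_for_status()
    return response.json()["text"]


if __name__ == "__main__":
    print(transcribe("sample.wav"))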

Creating files and folders for a project

Before LLMs, when I started a new project, I would spend some time creating files and folders for the project and writing boilerplate code.

Now, I usually start Cursor, describe my project in Agent mode (or a markdown file, which I attach to the chat) and it creates the file structure for me. I have to note it’s not perfect and sometimes it tends to overcomplicate things, but overall it’s more good than bad.

Writing boilerplate code

This is closely related to the previous point about creating files and folders for a project. Usually, there is a lot of boilerplate code to be written (e.g. every FastAPI server has roughly the same structure), so it's good that I can automate that part as well.
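To give a rough idea of what I mean by boilerplate, here is a minimal sketch of such a FastAPI skeleton (the endpoint and model names are made up):

# main.py - minimal FastAPI skeleton (illustrative sketch; names are hypothetical)
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Example Service")


class PredictRequest(BaseModel):
    text: str


class PredictResponse(BaseModel):
    label: str


@app.get("/health")
def health() -> dict:
    # Simple liveness check.
    return {"status": "ok"}


@app.post("/predict", response_model=PredictResponse)
def predict(request: PredictRequest) -> PredictResponse:
    # Placeholder logic; the actual model call would go here.
    label = "positive" if "good" in request.text else "negative"
    return PredictResponse(label=label)

You can then run it with uvicorn main:app --reload and extend it from there.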

Writing clearly defined bits of code

I find Cursor’s inline edit functionality to be very good for making targeted changes.

For example, say I want to add two more options to command line arguments. What I would do is select the code in question, press Ctrl + K on my keyboard, then write something along the lines of:

Please expand the options list to include two additional options:

* Option 1 - Description of what the option does
* Option 2 - Description of what the option does

I also use it sometimes if I’m unsure about some specific syntax or class attributes; it’s usually very good there.
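For a concrete picture of the kind of change I'm describing, here is a hypothetical argparse snippet where the last two options are the ones added via the inline edit (the option names are made up):

import argparse

parser = argparse.ArgumentParser(description="Example command line tool")
parser.add_argument("--input-path", help="Path to the input file")

# The two options added via the inline edit (hypothetical names):
parser.add_argument("--batch-size", type=int, default=32,
                    help="Number of items processed per batch")
parser.add_argument("--verbose", action="store_true",
                    help="Print extra logging information")

args = parser.parse_args()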

Asking codebase-related questions

I find Cursor very useful to get acquainted with a new codebase. I usually load the codebase inside Cursor, enter Ask mode and then I ask questions about the codebase.

Of course, I double-check the details with the programmers working on the codebase, but this gives me insights I might miss if I were just exploring the codebase manually.

The bad

Cleanly adding features to an existing codebase

If you want to add a feature to an existing codebase, doing it with LLMs can lead to a deterioration of code quality. In my experience, LLMs tend to overcomplicate solutions, and even when they take a simple approach, it usually involves some manual debugging.

For adding features to an existing codebase, I prefer consulting with the codebase author(s), then implementing the feature in a way we agree on, using LLMs for specific code blocks as I outlined in the section Writing clearly defined bits of code.

Even if the entire codebase was built with the help of LLMs, the LLM can still introduce some “unclean” code.

As an example, have a look at this snippet from my SWEMentor side project.

Let’s consider two classes:

  • CodeIssue – represents a code issue with its solution and context
  • CodePattern – represents a code pattern with its description and example

Now consider these two methods' signatures:

def add_issue(self, issue: CodeIssue) -> None:

So far so good (although the method could return a bool indicating whether it succeeded, rather than None). But now look at this method:

def add_pattern(
        self,
        name: str,
        description: str,
        example: str,
        category: str,
        tags: List[str]
    ) -> None:

So instead of taking the CodePattern class as an argument, it repeats the class's attributes one by one. This hurts the maintainability of the code and makes it more “unclean”.
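A cleaner version would mirror add_issue and take the class itself, something along these lines (a sketch; PatternStore is a made-up container name for illustration):

from dataclasses import dataclass
from typing import List


@dataclass
class CodePattern:
    name: str
    description: str
    example: str
    category: str
    tags: List[str]


class PatternStore:  # hypothetical container, for illustration only
    def __init__(self) -> None:
        self.patterns: List[CodePattern] = []

    def add_pattern(self, pattern: CodePattern) -> bool:
        # Take the whole CodePattern, mirroring add_issue, and report success.
        self.patterns.append(pattern)
        return True

This way add_issue and add_pattern have symmetric signatures, and a new CodePattern attribute only needs to be added in one place.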

This might look like nitpicking, but it adds up quickly and can lead to a messy codebase.

Hallucinations

I think hallucinations have improved tremendously with the latest versions of LLMs; it used to be a much bigger issue than it is now.

However, I still encounter more subtle kinds of hallucinations. Not so much the model producing complete gibberish, but rather giving me answers which are way off (like estimates which are 2-3x off from reality, yet seem plausible enough). I consider that to be a hallucination as well, albeit one that is harder to detect than total nonsense.

Malleability

Try the following experiment: Ask an LLM some question. Then add the following:

Be brutally honest.

I found that with some models (Claude 4 Opus in particular), including the brutal honesty remark can affect not only the answer, but also the tone of the entire conversation. This is definitely something to keep in mind.

The ugly

Having to switch chats when the context window limit is reached

Sometimes it happens that I talk to an LLM about something, then think of something else and write it in the same chat. This is how we usually talk as humans: we start with one topic, then switch to another and so on.

However, since LLMs have a context window, they cannot chat with you forever (not in the same chat, at least). On multiple occasions I found myself having to summarize a previous chat and start a new one, either because the current chat got too slow or because I simply exceeded the context length and couldn't go on.

This is annoying because as I said, when talking to humans you don’t really have to mind the context length; with LLMs, you do.

Subtly misunderstanding my question if I don’t provide the entire context

To illustrate this point, let’s say you’re discussing some career-related question with an LLM. The conversation is flowing nicely and you go back and forth a few times. Just as you reach the conclusion, the LLM outputs something like:

...
This applies to you because:

* This is standard for US-based machine learning engineers and consultants
...

Uh-oh, I see an issue here. The LLM mistakenly thought it was giving advice to a person residing in the US, while I'm from Europe. Realizing that the LLM made assumptions which aren't true, and which could alter the conclusion, has happened to me more than a few times.

This is somewhat related to hallucinations, since the LLM assumes things it wasn't explicitly told, but I put it under The Ugly category because it's just ugly to realize that you spent time going back and forth with the LLM on a topic, only to find it made assumptions which, if they don't hold, could fundamentally change the answer.

Conclusion

In this blog post, I tried to outline how I use LLMs in my day-to-day. I tried to be as specific as possible and include prompts wherever it made sense, so you can see how LLMs can be used on particular examples.

Hopefully you will give LLMs a shot (or another shot) at accelerating your workflow. If you're wondering which LLMs to use, I mostly use Anthropic's Claude and ChatGPT, but you could try other models as well. I recommend using the paid versions of the models, as I found they tend to perform better (especially for ChatGPT; for Claude, the free version is good in terms of quality, but you reach your message limit fast).

As a final note, I would say that I went from barely using LLMs at all 1.5 years ago to using them daily. In my opinion, LLMs are great tools for making you more productive, and there's no harm in trying them. I think if you give them a shot, you will likely find yourself doing your work faster by removing the tedious parts and focusing on what you actually enjoy doing.


Linux Tutorial Series – 198 – That’s it!

Here is the video version, if you prefer it:

That's it. We are done with our Linux journey. I hope it's been a joyful ride. I hope you feel confident that you have a firm grasp of the basics of Linux, and that if you need to explore any additional parts of Linux, you will have a good sense of where they fit into the bigger picture.

May the force of the penguin be with you from this point on!

Thank you for reading and I hope you enjoyed the ride!


Linux Tutorial Series – 197 – Recap

Here is the video version, if you prefer it:

Let’s recap what we learned about shell scripting:

  • Shell scripts are files that contain commands (alongside some shell scripting keywords)
  • Use shell scripts to manipulate files; if you find yourself manipulating strings or doing arithmetic, use another programming language
  • A line starting with #! is called a shebang
  • A line starting with a # is called a comment – it is ignored by the interpreter and serves only for human understanding of the shell script
  • Use single quotes unless you have a very good reason not to
  • There are special variables – for example, $1 corresponds to the first argument in your script – as well as some other ones
  • Exit codes tell you “how did the command do”
  • If statement follows the logic of “if this then that”
  • Else statement follows the logic of “if this then that, if not (else) then the other thing”
  • Logical operators are used to combine tests together
  • You can test for various conditions in a test
  • Use a case construct instead of a lot of elifs
  • for is used for iteration
  • Command substitution can be used to put the output of the command in a variable or to pass it as input to another command
  • You can read user input with read variableName
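To tie several of these together, here is a small illustrative script (just a sketch combining constructs we covered, not one of the original series examples):

#!/bin/bash

# Illustrative recap script (a sketch combining constructs from this series).
if [ -z "$1" ]; then
    # $1 is the first argument; a non-zero exit code signals failure
    echo 'Please pass a file name as the first argument'
    exit 1
fi

LINES=$(grep 'a' "$1")    # command substitution

for word in $LINES        # for loop iterates over the words
do
    echo $word
done

read answer               # read user input into a variable
echo "You typed: $answer"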

I hope you refreshed your memory!


Linux Tutorial Series – 196 – Things I never used in shell scripting

This is the video version, if you prefer it:

There are things I never used in shell scripting, such as sed, awk, basename, etc. Since shell scripts are composed mostly of commands, the commands I never used on the command line are also the ones I have never used in shell scripts.

I covered everything I deemed important for shell scripting, but if you encounter something you don't understand, feel free to use Google or reference (Shotts, 2019). One caveat: (Shotts, 2019) seems to take the stance of “show a whole lot of features of shell scripting”, whereas I am more biased towards “learn only the essentials of shell scripting and use other programming languages for activities other than file manipulation”. Just a difference to keep in mind.

Thank you for reading!

References

Shotts, W. (2019). The Linux Command Line, Fifth Internet Edition. Retrieved from http://linuxcommand.org/tlcl.php. Part 4 – Writing Shell Scripts


Linux Tutorial Series – 195 – Fixing errors

Here is the video version, if you prefer it:

Fixing errors, also known as troubleshooting, is the process of removing errors from your bash script.

If you adhere to the rule I laid out before, which was “Use single quotes to enclose values unless you have a concrete reason not to”, you will be fine most of the time. For the times when you are not, literally copy and paste the error message you're getting into Google and you will get some suggestions.

There is a finely written piece on this in (Shotts, 2019), which again, let me remind you, is free on the World Wide Web, so go read it.

Thank you for reading!

References

Shotts, W. (2019). The Linux Command Line, Fifth Internet Edition. Retrieved from http://linuxcommand.org/tlcl.php. Pages 454-467


Linux Tutorial Series – 194 – Reading user input

Here is the video version, if you prefer it:

You can read user input in shell scripts. (Ward, 2014)

Here is an example of a script which reads user input:

#!/bin/bash

read name

echo "My name is $name"

Notice how I used the double quotes to make the shell expand my variable. My variable, by the way, is called name. read name prompts the user for input and when the user inputs the information (and presses Enter), that input is stored in the variable named name.

Here is how it works:

mislav@mislavovo-racunalo:~/Linux_folder$ ./tutorialScript.sh

Mislav

My name is Mislav

As you already know, you can also get user input through arguments that you pass to your shell script. I don't know which way is more idiomatic to the shell, but I'd opt for passing values as arguments and then doing checks on the arguments at the beginning of the script. You can also do these checks after reading something into a variable (such as “Is the string that the user entered empty?”), but doing such checks after each read is tedious; it's much better to do them once at the beginning of the script. Both approaches exist, so both must be fine.
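For example, an argument-based version of the script above, with the check done at the start, might look something like this (just a sketch):

#!/bin/bash

# Check the argument at the beginning of the script (illustrative sketch).
if [ -z "$1" ]; then
    echo 'Usage: ./tutorialScript.sh yourName'
    exit 1
fi

echo "My name is $1"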

Hope you learned something useful!

References

Ward, B. (2014). How Linux Works: What Every Superuser Should Know (2nd ed.). No Starch Press. Page 269


Linux Tutorial Series – 193 – Command substitution

Here is the video version, if you prefer it:

You can take the result of a command and store it in a variable, or use the result of that first command as an argument for another command. This is called command substitution. (Ward, 2014)

Here is an example of command substitution in a sample script:

#!/bin/bash

LINES=$(grep 'a' aba.txt)

for word in $LINES

do

echo $word

done

We first take every line that contains the character a in the file aba.txt and store those lines in the variable LINES. Then we iterate over each word in LINES and print it. Why does LINES contain words, not lines? It does contain lines, but the words that make up those lines are separated by spaces (you can see this for yourself by putting echo $LINES just before the for loop), and since each space is interpreted as a delimiter in the sequence, we get individual words.

Here is the output (along with aba.txt contents):

mislav@mislavovo-racunalo:~/Linux_folder$ cat aba.txt

Mustard is how we transfer wealth

Some stuff Abba Mustard

Mustard Mustard

Mustard Mustard

It's the Mustard

In the Rich Man's World

AB

Mustard is how we transfer wealth

Ab

aB

ab

mislav@mislavovo-racunalo:~/Linux_folder$ ./tutorialScript.sh

Mustard

is

how

we

transfer

wealth

A simple example of using the result of the first command as an input to another command is echo $(ls). We use the output of ls and we echo it.

Thank you for reading!

References

Ward, B. (2014). How Linux Works: What Every Superuser Should Know (2nd ed.). No Starch Press. Pages 263-264


Linux Tutorial Series – 192 – while and until loops

Here is the video version, if you prefer it:

while and until loops are used for iteration, just like for loops. However, I won't teach you these. Why? Because, as (Ward, 2014) says, if you ever need a while loop (or an until loop), it's a good time to move to another programming language. I agree.

If you really must use them, (“Bash While Loop Examples,” n.d.) and a Google search will teach you how. But again, I will repeat myself: if you need to use a while or an until loop, switch to another programming language.

Thank you for reading!

References

Bash While Loop Examples. (n.d.). Retrieved February 22, 2020, from https://www.cyberciti.biz/faq/bash-while-loop/

Ward, B. (2014). How Linux Works: What Every Superuser Should Know (2nd ed.). No Starch Press. Pages 262-263


Linux Tutorial Series – 191 – for loops

Here is the video version, if you prefer it:

for loops are used for iterating. Iterating means going over a sequence of values.

Here is an example of a for loop, modeled after (“Bash For Loop Examples,” n.d.).

#!/bin/bash

for i in 1 2 3 4 5

do

echo $i

done

This will produce the following output:

mislav@mislavovo-racunalo:~/Linux_folder$ ./tutorialScript.sh

1

2

3

4

5

What just happened? The variable i first took on the value 1 (the first element in the sequence), and its value was echoed. Then i took on the next value in the sequence (the value after the one it currently had), so i was 2, and 2 was printed – and so on until the end of the sequence.

You can specify numerical sequences more easily (the above one could have been written as {1..5}) and you can iterate over other sequences (not just numbers, but other things – files for example). I leave you to consult the reference and Google for some further examples.

Hope you learned something useful!

References

Bash For Loop Examples. (n.d.). Retrieved February 22, 2020, from https://www.cyberciti.biz/faq/bash-for-loop/


Linux Tutorial Series – 190 – case statement

Here is the video version, if you prefer it:

Today let's talk about the case statement. The case statement is used to replace a lot of elif statements. Here is our entire script, with the same functionality it had before, rewritten using a case statement:

#!/bin/bash

case $1 in

'Hello')

echo 'Hello back to you!'

;;

'Hi')

echo 'Hi!'

;;

*)

echo 'You are rude.'

;;

esac

Here is how the case statement works:

  1. You tell case which argument you are testing (more specifically, pattern matching) – we are testing the argument passed to the script in the first position ($1)
  2. case goes through the list of patterns (each pattern ends in a )) and if it finds a matching pattern, the code between the ) and the ;; is executed and then it skips to esac
  3. esac denotes the end of the case statement

The case statement does not evaluate any exit codes; it just matches patterns.

Here are some things to note: (Ward, 2014)

  • multiple strings can be matched with | – if I put 'Hi!'|'Hi', I would match both “Hi!” and “Hi”, and that line in the script would look like 'Hi!'|'Hi')
  • * matches a case which is unmatched by any other case

Hope you found this useful!

References

Ward, B. (2014). How Linux Works: What Every Superuser Should Know (2nd ed.). No Starch Press. Pages 261-262