In this short article I would like to mention a few commands that enable you to monitor your computer's resource usage. They are: (Ward, 2014)
I have never used any of the above commands, so I am leaving them here for you to explore if you ever have the need. So far, I haven't had the need.
A note: To understand the output of some of the commands above, you may need to understand how memory works in modern computers. In particular, you need to understand what virtual memory is, what pages are, and what a page fault is. Reading the Wikipedia articles on those topics, and further Googling anything that remains unclear, will be enough for you to gain a grasp of what is going on.
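If `vmstat` happens to be among the commands you explore (an assumption on my part, not a claim about the list above), its output is exactly where that virtual-memory vocabulary pays off:

```shell
# Print system-wide memory, swap, paging, and CPU statistics
# every 2 seconds, 3 times (vmstat ships with procps on most distros).
vmstat 2 3
```

The `si`/`so` columns, for example, show pages swapped in and out per second.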
Thank you for reading!
Ward, B. (2014). How Linux Works: What Every Superuser Should Know (2nd ed.). No Starch Press. Pages 183-188
0.11 tells me that, over the last minute, an average of 0.11 processes were ready to run on my CPU (processor); over the last 5 minutes the average was 0.14, and over the last 15 minutes it was 0.18.
If you have multiple cores (I have 4), then a load average of 1 means that one core has been busy all the time, while the other 3 have been “chilling out”. The point is that when considering the load averages, you should factor in the number of cores in your computer, and if you are in fact dealing with a high load average, use top to find out which process is using the most resources (it will be at the top of top's list).
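As a quick sketch, you can read the three load averages and your core count straight from the shell:

```shell
# uptime prints the 1-, 5-, and 15-minute load averages at the end
# of its output; nproc prints the number of cores to compare against.
uptime
nproc
```

A rule of thumb: a load average persistently above the number `nproc` reports means processes are queueing for CPU time.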
Thank you for reading!
Ward, B. (2014). How Linux Works: What Every Superuser Should Know (2nd ed.). No Starch Press. Pages 180-181
The question you may have is: “What is the difference between signals and interrupts?”
The difference is as follows: Interrupts are the communication between the CPU (Central Processing Unit – your processor) and the operating system (the kernel), and signals are the communication between processes and the operating system (the kernel). (“Signals and interrupts a comparison,” n.d.)
Let’s go into a bit more depth:
When an interrupt occurs (initiated by either hardware or software), it is actually managed by the CPU itself, which “interrupts” (pauses) the execution of the current process and tells the kernel to invoke the interrupt handler (which, to recap, is a program designed to handle interrupts). Signals, on the other hand, are used to communicate between processes. While a signal is traveling from the sending process to the receiving process, it is managed by the kernel, which invokes the action appropriate for the signal the process received.
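If you want to see a signal being delivered, here is a minimal shell sketch (the handler message is my own invention): it installs a handler for SIGTERM and then sends that signal to itself, so the kernel delivers it right back.

```shell
#!/bin/sh
# Install a handler that runs when this shell receives SIGTERM.
trap 'echo "got SIGTERM, cleaning up"; exit 0' TERM

# Send ourselves the signal; the kernel delivers it and the trap runs.
kill -TERM $$
```

Without the `trap`, the default action for SIGTERM would simply terminate the shell.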
I hope you gained some clarity on the difference between the two. This isn't that important, and honestly I could have left out the part about interrupts, but I just wanted you to know about them since we were already talking about the operating system at such a low level. If you didn't quite catch it, don't worry – it won't be much of a hindrance.
This is for the curious souls out there. I hope there are some. Even if you are not as curious, you will benefit from reading this as it paints the bigger picture.
You may have pondered something along the lines of: “OK, there exist processes. OK, processes have priorities. But what is the connection between hardware and software? That is, how does the operating system know that we, for example, moved our mouse? Are we dealing with signals?” Not quite, but the concept is very similar.
Here is where the concept of hardware and software interrupts comes in. Basically, hardware and software interrupts tell the operating system (the kernel): “Hey, deal with me!”. For example, pressing a key on your keyboard triggers a hardware interrupt and your operating system processes it. Interrupts also have priorities, because multiple interrupts can occur at the same time and they need to be handled according to their urgency. It is also important to note that there are programs called interrupt handlers that are executed when an interrupt occurs.
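On Linux you can actually peek at the kernel's interrupt bookkeeping: `/proc/interrupts` lists each interrupt line and how many times each CPU core has handled it. A quick sketch:

```shell
# The first line names the CPUs; each following row is one interrupt
# source (timer, keyboard, network card, ...) with per-CPU counts.
head -n 5 /proc/interrupts
```

Pressing a key and re-running this shows the keyboard line's counter ticking up.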
An important caveat: This is not exactly how it works, but it paints the picture. In the next post, I will clarify the details, but they are minor and don’t impact your understanding that much.
Let’s talk about process priorities today. Why do processes even have priorities?
Let’s say that the world within your computer is like the real world. Let’s further imagine you are going about your day, doing your thing, driving your car, when all of a sudden you hear an ambulance. Uh-oh. You know you have to move yourself out of the ambulance’s way, because it has priority.
By the same token, processes in your operating system have priorities, depending on how important they are. In Linux, processes have priorities which range from -20 to 20, with -20 being the most important. Yes, you read that right: -20 is the highest priority. (Ward, 2014)
There is also something called a nice value, which is added to the process priority. By default, it is 0. This makes sense – as we learned in the previous paragraph, the higher the priority number, the lower the actual priority. So if a process's nice value is 10, then its real priority is whatever priority it has + 10. The higher the nice value, the nicer the process, since it effectively lowers its own priority. You will most likely never have to meddle with the nice value of a process.
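If you ever do want to meddle with it, here is a sketch of starting a background job with a nice value of 10 and checking it (the `NI` column in ps):

```shell
# Start a low-priority background job, show its nice value, clean up.
nice -n 10 sleep 30 &
ps -o pid,ni,comm -p $!
kill $! 2>/dev/null
```

`renice` can change the nice value of a process that is already running; lowering it below 0 requires root.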
Hope you learned something useful!
Ward, B. (2014). How Linux Works: What Every Superuser Should Know (2nd ed.). No Starch Press. Pages 179-180
If you are a software developer, you sometimes want to know how much time your program needs to execute. Or, if you are a “regular” Linux user, maybe you want to know how long a command takes to execute (albeit most likely not). Here is how to measure it: (Ward, 2014)
mislav@mislavovo-racunalo:~$ time ls
There are 3 relevant times: (“What do ‘real’, ‘user’ and ‘sys’ mean in the output of time(1)?,” n.d.)
real time – the wall-clock time from the start of the call to the end of the call; this includes the time your process waited for some resource and the time the processor was executing other processes (your processor can switch to another process, execute a fraction of it, then get back to executing your process)
user time – the amount of CPU time spent executing the process's own code (in user mode)
sys time – the amount of CPU time spent in the kernel on behalf of the process; if you want to do something that only the kernel can do (remember that a regular user can't do everything), you call a kernel function to do it, and that time gets added to sys time
The CPU time your process takes up is user + sys time.
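To see the difference in practice, here is a small sketch: a command that mostly sleeps racks up real time but almost no CPU time.

```shell
# 'sleep 1' takes ~1 second of real (wall-clock) time, but its user
# and sys times are near zero, because it waits instead of computing.
time sleep 1
```

Conversely, a CPU-bound loop would show user time close to its real time.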
Ward, B. (2014). How Linux Works: What Every Superuser Should Know (2nd ed.). No Starch Press. Pages 178-179
Today, let’s go back down the good ol’ OS lane and talk about something called threads. A side note: I tried to fit in “memory lane”, but I couldn’t; it wouldn’t fit the content.
A thread is a sequence of instructions that can be executed. A process always has at least one thread, but it can have multiple threads.
“Why have multiple threads?”, you may ask. A fair question. If you divide a task up into multiple threads (and assuming you have multiple processor cores), each thread can execute on its own core and the job can be done faster. An example: Say you were multiplying two matrices, each of dimensions 20 000 x 20 000. You could do it in one thread, or you could create 4 threads, each dealing with a particular part of the matrix. Or 8 threads, or 80. The point is, the more threads you have – up to the number of cores available – the faster the program can execute, if you have the appropriate processor (one with multiple cores) or multiple processors in your system (as in servers). Beyond that, extra threads just take turns on the same cores.
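You can check a process's thread count from the shell; a sketch using ps (NLWP stands for “number of lightweight processes”, i.e. threads):

```shell
# Thread count of the current shell (usually 1), then the five
# processes with the most threads system-wide.
ps -o nlwp= -p $$
ps -eo nlwp,comm --sort=-nlwp | head -n 5
```

Browsers and desktop environments typically sit at the top of that second list.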
A thing I want to mention: Even if you have one processor core, you may have many apps open and they seem to be working flawlessly. In that case, what is actually happening is that you have an illusion of multitasking – your processor is just switching between programs very fast. You don’t even notice that switch, but again, it is an illusion of parallel execution. (Ward, 2014)
A thing I also want to mention: Dealing with threads is hard. Like, real hard. An example: Say you have two threads running concurrently (at the same time); call them thread A and thread B. Say both of them want to write to the same memory location. What can happen? Well, thread A can write its result first, then thread B can write its result, and you are left with thread B's result. The reverse is also possible – thread B can write its result and have it overwritten by thread A. Or thread A can be halfway through writing its result when thread B starts writing its own… It is a mess. There are mechanisms to deal with this (locks, for example), but just know that dealing with threads is hard.
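Threads aren't directly scriptable in the shell, so here is a sketch that recreates the same lost-update problem with two background processes racing on one file (the file path and loop counts are my own choices):

```shell
#!/bin/sh
# Two concurrent writers incrementing a shared counter with no locking.
# Each read-modify-write can interleave with the other's, losing updates.
echo 0 > /tmp/counter
bump() {
  for i in $(seq 1 200); do
    n=$(cat /tmp/counter)
    echo $((n + 1)) > /tmp/counter
  done
}
bump & bump &
wait
# With no race the result would be 400; interleaving usually loses some.
cat /tmp/counter
```

Run it a few times – the final count varies from run to run, which is exactly why unsynchronized shared writes are a mess.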
Hope you learned something useful!
Ward, B. (2014). How Linux Works: What Every Superuser Should Know (2nd ed.). No Starch Press. Pages 5-6
Today, I will just throw a bug in your ear (this is a colloquialism in Croatia for “useful for you to know”) and tell you about two commands I have used – pgrep and pkill.
You most likely won't need these commands in most cases, but sometimes you know only the process name, not its ID, and you want to end the process. Then you can use pgrep in combination with xargs (see here: (“Can I chain pgrep with kill?,” n.d.)) or use pkill with the process name as the argument (“pkill(1) – Linux man page,” n.d.).
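A sketch of both in action (note that pkill matches every process with that name, so be careful with common names like “sleep”):

```shell
# Start a throwaway long-running process, list matching PIDs by name,
# then terminate every process with exactly that name (-x = exact match).
sleep 300 &
pgrep -x sleep    # prints the PID(s) of processes named exactly "sleep"
pkill -x sleep    # sends SIGTERM to that same set of processes
```

Without `-x`, both commands match any process whose name merely contains the pattern, which is an easy way to kill more than you intended.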