
Tuesday, June 25, 2013

IPC device using Linux Kernel "completions"

The Linux kernel provides a service called "completions" for notifying another process that a particular task has been completed. Below is a device driver for a simple inter-process communication device that can be used to send a string of 4 characters between two processes.

Completions

Here are some details about completions:

  • A completion is a mechanism that one process can use to tell another that a particular task has been completed
  • <linux/completion.h> must be included to use this.
  • A completion can be declared by DECLARE_COMPLETION(my_completion)
  • If it has to be created and initialized dynamically then we may use the following method.           
struct completion my_completion;
init_completion(&my_completion);
  • Waiting can be done by wait_for_completion(struct completion *c)
  • It performs an uninterruptible wait; if nobody ever issues a complete(), the result is an un-killable process.
  • Completion can be notified by
    • void complete(struct completion *c)
    • void complete_all(struct completion *c)
    • The two functions behave differently when more than one process is waiting
    • In that case complete() wakes up only one waiting process, while complete_all() wakes up all of them
    • The structure can be reused without any problem unless complete_all() is used. If complete_all() is used, it has to be re-initialized in the following way :- INIT_COMPLETION(struct completion c)
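As a small, purely illustrative sketch (not from the original post), a kernel thread can signal a completion that the module's init routine waits on; the module and thread names below are made up:

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/completion.h>
#include <linux/delay.h>
#include <linux/err.h>

static DECLARE_COMPLETION(work_done);

static int worker(void *data)
{
        msleep(1000);                   /* pretend to do some work */
        complete(&work_done);           /* tell the waiter that the task is finished */
        return 0;
}

static int __init demo_init(void)
{
        struct task_struct *t = kthread_run(worker, NULL, "completion-demo");

        if (IS_ERR(t))
                return PTR_ERR(t);
        wait_for_completion(&work_done);        /* blocks until worker() calls complete() */
        pr_info("completion-demo: worker finished\n");
        return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");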

The Device

The structure of the device is simple. Here is the write operation:
copy_from_user(buff,buffer,4);
complete(&sig);
The write operation simply copies the data from user space into an array in kernel space. It then issues a complete(). This is nothing more than a declaration that the task associated with sig has been completed, so a process waiting on sig may proceed.
The read operation is as follows:
wait_for_completion(&sig);
copy_to_user(buffer,buff,4);
The read operation, on the other hand, waits on sig using wait_for_completion(). When another process issues a complete() on sig, the reader can proceed.
The device has the following behavior:
  • Reads and writes always transfer exactly 4 characters.
  • The data written to the device persists until a read operation is done on the device. The write operation is non-blocking.
  • The read operation has to wait until something is written to the device. Hence the read operation is blocking.
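A minimal sketch of such a driver is shown below; it is registered as a misc device for brevity, and the device name ipc_completion and the error handling are illustrative additions rather than the original listing:

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/uaccess.h>
#include <linux/completion.h>

#define MSG_LEN 4

static char buff[MSG_LEN];              /* kernel-space copy of the message */
static DECLARE_COMPLETION(sig);         /* signalled when new data arrives  */

static ssize_t ipc_write(struct file *f, const char __user *buffer,
                         size_t len, loff_t *off)
{
        if (len != MSG_LEN)
                return -EINVAL;
        if (copy_from_user(buff, buffer, MSG_LEN))
                return -EFAULT;
        complete(&sig);                 /* wake up one waiting reader */
        return MSG_LEN;
}

static ssize_t ipc_read(struct file *f, char __user *buffer,
                        size_t len, loff_t *off)
{
        if (len < MSG_LEN)
                return -EINVAL;
        wait_for_completion(&sig);      /* block until a writer calls complete() */
        if (copy_to_user(buffer, buff, MSG_LEN))
                return -EFAULT;
        return MSG_LEN;
}

static const struct file_operations ipc_fops = {
        .owner = THIS_MODULE,
        .read  = ipc_read,
        .write = ipc_write,
};

static struct miscdevice ipc_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "ipc_completion",
        .fops  = &ipc_fops,
};

static int __init ipc_init(void)
{
        return misc_register(&ipc_dev);
}

static void __exit ipc_exit(void)
{
        misc_deregister(&ipc_dev);
}

module_init(ipc_init);
module_exit(ipc_exit);
MODULE_LICENSE("GPL");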
Reference: Linux Device Drivers, 3rd ed., Chapter 5

Saturday, May 11, 2013

HIGHMEM and Memory Zones from a newbie's perspective


HIGHMEM

Earlier Linux kernels (about 10 years ago) were unable to support more than 1GB of physical memory on 32-bit x86 systems. This was a consequence of the way the virtual memory was arranged. In those 32-bit systems, with a 3:1 split, the first 3GB of the virtual address space was the user address space and the remaining 1GB was the kernel address space. The 1GB allocated to the kernel can be mapped to any part of physical memory, and if the kernel has to access a page of physical memory, that page must be mapped into its address space. Earlier kernels used a static mapping, and hence with a 1GB address space the kernel could only map 1GB of physical memory. Any physical memory beyond 1GB could never be accessed by the kernel, so those earlier kernels were restricted to using 1GB of physical memory on 32-bit x86 systems.

The solution devised for this problem is high memory, or simply HIGHMEM. In this strategy the physical memory is divided into low memory and high memory. Low memory still uses a permanent mapping: certain pages of physical memory are always mapped into the kernel address space. The rest of the physical memory is the high memory, which the kernel reaches through temporary mappings in the remaining part of its address space; such pages are mapped only when required. This enables the kernel to access any page within the 4GB range. Kernel data structures such as linked lists must live in low memory itself, which helps in maintaining pointer consistency.
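As a small sketch of the idea (my own illustration, using the kmap()/kunmap() interface of kernels from that era), the kernel can allocate a page that may live in high memory and map it only while it is being touched:

#include <linux/module.h>
#include <linux/gfp.h>
#include <linux/highmem.h>      /* kmap() / kunmap() */
#include <linux/string.h>

static int __init himem_demo_init(void)
{
        struct page *page;
        char *vaddr;

        page = alloc_page(GFP_HIGHUSER);        /* may come from high memory */
        if (!page)
                return -ENOMEM;

        vaddr = kmap(page);                     /* create a temporary kernel mapping */
        memset(vaddr, 0, PAGE_SIZE);            /* the page is addressable only while mapped */
        kunmap(page);                           /* tear the mapping down when done */

        __free_page(page);
        return 0;
}

static void __exit himem_demo_exit(void)
{
}

module_init(himem_demo_init);
module_exit(himem_demo_exit);
MODULE_LICENSE("GPL");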



MEMORY ZONES

In the Linux kernel the page frames are grouped logically into three zones. They are:
1. ZONE_DMA
2. ZONE_NORMAL
3. ZONE_HIGHMEM
There are a few reasons for dividing the physical memory this way. On some architectures there is a constraint on which range of memory can be used for certain operations. An example is the x86 architecture, where ISA devices are only capable of addressing the first 16MB of RAM; for DMA operations, ISA devices can therefore only use the first 16MB. Other architectures do not have this constraint (for example, PPC). Besides this, we have to accommodate the HIGHMEM solution. With this division, managing memory becomes easier, with each zone having a struct to store its details.
ZONE_DMA:
Above I mentioned that ISA devices on x86 have a constraint in memory addressing. So on the x86 architecture, ZONE_DMA is the group of pages that belong to the first 16MB of RAM. PPC does not have this constraint, hence on PPC ZONE_DMA is empty.

ZONE_NORMAL:
On the x86 architecture, the first 896MB of page frames are permanently mapped into the kernel address space. Stated the other way around, the first 896MB of the kernel address space is mapped to the first 896MB of physical memory. That leaves 128MB of unmapped addresses in the 1GB kernel address space, and these unmapped addresses are used for the temporary mappings of high memory. The union of the ZONE_DMA pages and the ZONE_NORMAL pages gives us the low-memory pages. If ZONE_NORMAL pages are not available for an allocation, the kernel can, as a last resort, use ZONE_DMA, but not vice versa.


ZONE_HIGHMEM:
ZONE_HIGHMEM is the collection of high-memory pages. On 32-bit x86 that is any memory above 896MB. These pages are not permanently mapped into the kernel address space, as explained in the HIGHMEM section.
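As a rough, purely illustrative sketch, the GFP flags passed to alloc_pages() decide which zone an allocation is served from (the printed page frame numbers are just for inspection):

#include <linux/module.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static int __init zones_demo_init(void)
{
        struct page *dma  = alloc_pages(GFP_KERNEL | GFP_DMA, 0); /* ZONE_DMA: first 16MB on x86 */
        struct page *norm = alloc_pages(GFP_KERNEL, 0);           /* ZONE_NORMAL, may fall back to ZONE_DMA */
        struct page *high = alloc_pages(GFP_HIGHUSER, 0);         /* may be served from ZONE_HIGHMEM */

        if (dma) {
                pr_info("DMA page frame:     %lu\n", page_to_pfn(dma));
                __free_pages(dma, 0);
        }
        if (norm) {
                pr_info("Normal page frame:  %lu\n", page_to_pfn(norm));
                __free_pages(norm, 0);
        }
        if (high) {
                pr_info("Highmem page frame: %lu\n", page_to_pfn(high));
                __free_pages(high, 0);
        }
        return 0;
}

static void __exit zones_demo_exit(void)
{
}

module_init(zones_demo_init);
module_exit(zones_demo_exit);
MODULE_LICENSE("GPL");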

Sources:
[1] Linux Device Drivers, 3rd ed., Corbet et al.
[2] Linux Kernel Development, Robert Love
[3] http://lwn.net/Articles/75174/

Wednesday, July 18, 2012

Need for Shell Built-ins

Usually a command is executed by the shell by spawning a new process. Though this strategy works well for most commands, it fails for certain others, especially ones that change the behavior of the shell itself. An example of such a command is cd. As you know, cd is the command used to change the current working directory of a shell.


Suppose we decided to adopt the conventional method for implementing the cd command, i.e. we spawn a new process whenever cd is encountered. If you were to explore the Linux system call API, you would find that the only system call available for changing the directory is chdir(). The problem with chdir() is that it can only change the current working directory of the calling process. Thus, if we implement cd as a separate process, it will have no effect on our main process, the shell. That is where shell built-ins kick in. Built-ins are nothing but commands whose implementation is the task of the shell itself. So whenever a built-in command is encountered, it is executed by the shell itself rather than in a separate process.


Below is a minimal shell that implements cd as a built-in.
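This is a minimal sketch rather than a full shell; it assumes, as the note below says, that commands live in /bin and take at most one argument (the prompt string and buffer sizes are arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
        char line[256];

        for (;;) {
                printf("msh> ");
                fflush(stdout);
                if (!fgets(line, sizeof(line), stdin))
                        break;

                char *cmd = strtok(line, " \t\n");
                char *arg = strtok(NULL, " \t\n");      /* at most one argument */

                if (!cmd)
                        continue;
                if (strcmp(cmd, "exit") == 0)
                        break;
                if (strcmp(cmd, "cd") == 0) {           /* built-in: must run inside the shell itself */
                        if (chdir(arg ? arg : getenv("HOME")) != 0)
                                perror("cd");
                        continue;
                }

                if (fork() == 0) {                      /* every other command runs as a child process */
                        char path[300];
                        snprintf(path, sizeof(path), "/bin/%s", cmd);
                        execl(path, cmd, arg, (char *)NULL);
                        perror(path);                   /* reached only if exec fails */
                        _exit(1);
                }
                wait(NULL);
        }
        return 0;
}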
Note: The minimal shell given above accepts at most one argument (including switches) for any given command. Also, the only commands supported are those in the /bin directory, along with exit and cd.

Sunday, January 22, 2012

Command line tool for Internet Usage Monitoring in Linux

Yeah, a command line tool for monitoring internet usage, so that I can avoid overusing my connection. This is something I had been trying to develop for quite some time, and the breakthrough happened just yesterday. Initially I tried to develop it using shell scripts. The whole script was centered on the ifconfig command, but when I installed it in the crontab it wouldn't work, for reasons I don't know yet. Then I turned to Python. Here too the script was centered on ifconfig, and to my disappointment that too didn't work when installed in the crontab. It was at this point that I thought of a different strategy: to get the details of internet usage in the current session, I decided to rely on the file that ifconfig itself reads, rather than on the ifconfig command.
So the next challenge was finding out which file ifconfig relies on. I decided to analyse the system calls used by ifconfig with the help of the strace command, and arrived at the conclusion that /proc/net/dev is the file ifconfig uses for the details of internet usage in the current session. With this file, things become a lot easier.
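As a small illustration (this is only a sketch of the idea, not the actual Eusage code), a C program can read the per-interface byte counters straight from /proc/net/dev:

#include <stdio.h>

int main(void)
{
        FILE *fp = fopen("/proc/net/dev", "r");
        char line[512];

        if (!fp) {
                perror("/proc/net/dev");
                return 1;
        }

        /* The first two lines are column headers; each remaining line is
         * "iface: rx_bytes rx_packets ... (8 receive fields) tx_bytes ..." */
        while (fgets(line, sizeof(line), fp)) {
                char iface[32];
                unsigned long long rx, tx;

                if (sscanf(line, " %31[^:]: %llu %*u %*u %*u %*u %*u %*u %*u %llu",
                           iface, &rx, &tx) == 3)
                        printf("%-8s received %llu bytes, sent %llu bytes\n",
                               iface, rx, tx);
        }

        fclose(fp);
        return 0;
}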
As of now the application monitors the usage for one month, after which the counter is reset to zero automatically. You can change the day of the month on which the monitoring begins. The application also has a feature whereby you can set a free-usage period, so that internet usage is not monitored between the specified times. It can only monitor internet traffic passing through the eth0 (wired) and eth1 (wireless) interfaces.


You may track the development here. You can install it by running the Installer.sh script (don't change the file names, and all files should be in the same directory as Installer.sh). After installing, you can use the tool by running the command Eusage along with one of the switches below.
Note: The scripts won't work without running Installer.sh. Don't try to run Eusage without a switch.
Eusage -u :- gives the details of usage so far, from the beginning of the month
Eusage -r :- clears the memory (i.e. everything will be set to zero)
Eusage -h :- shows help
Eusage -c :- changes the free-usage time and sets the day of the month on which monitoring begins



Saturday, December 17, 2011

Twitter for System Administration



How about shutting down/restarting your system via SMS? 
Is that something that is possible? Yes, with Twitter's SMS service it certainly is, and it can come in handy for system administrators. Here I will show you how to control your system on the move via SMS, using the services provided by Twitter.
Initially we create two Twitter accounts; call them Master and Slave. Master is a normal account, while Slave is a private account that is connected only to Master (i.e. Slave follows only Master, and Master is Slave's only follower). This prevents users other than Master from sending messages to Slave.
The next step is a program that monitors Slave's message inbox for messages from Master. This program has to run on the system that is to be controlled, and it should be able to log in to Slave's account and check its inbox. That's where the Twitter API comes to the rescue. For various reasons Twitter no longer allows the earlier Basic authentication, so we have to rely on OAuth. OAuth is a trickier authentication method whose explanation is beyond the scope of this article. Using the Twitter API we check the message inbox of Slave. Remember that Master is the only Twitter user who can send messages to Slave, and that a message to a Twitter user can be sent via SMS. The messages are monitored by the program and, depending on the message, the necessary actions are taken. Below is a simple program that will shut down or restart the system depending on the message.


Though the sample program can only be used for shutting down or restarting the system, system administrators can add functionality like auto-replying to queries issued by Master, thus achieving greater control over the system via SMS.
Note: The Python tweepy module helps in communicating with Twitter. It can be installed on a Linux system with the command easy_install tweepy. The program is installed in the crontab so that it runs every minute.

Thursday, December 01, 2011

The Notifier

Have you ever been in a situation where you were forced to check a web page again and again to see whether a particular piece of information had appeared there? Well, I have been in such a situation, where I wanted to see whether my university results had been released or not. Checking the university site every time using a browser was an option, but I decided to automate the process. That's how the following script was born.
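The script itself is not reproduced here, but the idea is simple: fetch the page, search it for a string, and raise a desktop notification when the string appears. A rough sketch of the same idea (assuming wget and notify-send are available; the URL and strings below are placeholders) looks like this:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Placeholder values -- change these for your own page and message. */
static const char *URL            = "http://www.example.com/results.html";
static const char *Lookfor        = "Results announced";
static const char *Displaymessage = "The information you were waiting for has appeared!";

int main(void)
{
        char cmd[512], line[1024];
        FILE *page;
        int found = 0;

        /* Fetch the page with wget and scan it line by line for the string. */
        snprintf(cmd, sizeof(cmd), "wget -q -O - '%s'", URL);
        page = popen(cmd, "r");
        if (!page)
                return 1;
        while (fgets(line, sizeof(line), page)) {
                if (strstr(line, Lookfor)) {
                        found = 1;
                        break;
                }
        }
        pclose(page);

        if (found) {
                /* Raise a desktop notification (requires notify-send from libnotify). */
                snprintf(cmd, sizeof(cmd), "notify-send '%s'", Displaymessage);
                system(cmd);
        }
        return 0;
}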

After writing this script, I had to automate running it. This was achieved using crontab: I scheduled it so that it gets executed every minute. So when the information appears on the web page, I get notified automatically.
Note: By changing the variables URL, Lookfor and Displaymessage, anyone can customize it for their own use.
Steps to set this up on a Linux system:
Step 1: Save the script with executable permissions. This can be done with the command chmod 777 filename
Step 2: Type in the command crontab -e. A file opens up, to which you need to add the following line: * * * * * absolute_path_to_the_script

Thursday, October 06, 2011

Convert Images to a single PDF

Converting images to PDF on the command line is a piece of cake on Ubuntu. The program 'convert' can be used to convert a bunch of images into a single PDF file. It can be installed by installing the imagemagick package, which in turn can be done simply by issuing the command:
                                                      sudo apt-get install imagemagick

Suppose you have a number of images that have to be converted into a single PDF, then:
Step 1 :- Create a new directory and copy all the images into it
Step 2 :- Go to the command line and change directory to the new directory
Step 3 :- Issue the command convert * example.pdf . This will convert all the files in the current directory into a single PDF file.

Converting PDF files to text files is also easy. The command pdftotext can be used for this. Thus the command pdftotext example.pdf example.txt will create a text version of example.pdf. In most cases pdftotext comes preinstalled on Ubuntu.

Tuesday, June 28, 2011

Automation in Linux 1.0

Automation is an important ingredient in any robust software setup. Linux provides you with tools that can be used to automate the execution of scripts. Here we will discuss mainly two methods for the auto-execution of scripts, namely the init.d method and the .bashrc method.


init.d method
Consider that you are required to run a script automatically at start-up. Then you can go for the init.d method. init.d is actually a directory inside the /etc folder. I will illustrate how to auto-run scripts at start-up through the following steps:
Step 1: Put the script you intend to run into the folder /etc/init.d. You need to be root to do this.
Step 2: Make the script executable with chmod +x Script_Name
Step 3: Issue the command update-rc.d Script_Name defaults 99


Step 3 requires further explanation. update-rc.d maintains the init script links. I will quote the man page here:
update-rc.d updates the System V style init script links /etc/rcrunlevel.d/NNScript_Name whose target is the script /etc/init.d/Script_Name. These links are run by init when it changes runlevels; they are generally used to start and stop system services such as daemons. runlevel is one of the runlevels supported by init, namely, 0123456789S, and NN is the two-digit sequence number that determines where in the sequence init will run the scripts.
When we give defaults to update-rc.d, the script gets links for every runlevel. You can see the links in the folders /etc/rc#.d, where # stands for the numbers 0 to 6, which denote the runlevels. Care should be taken to avoid automating interactive programs through this method.


Tip :- You can make use of a single start-up script, made to run at start-up by the above method, to run any number of other scripts. For example, on my system I've established a start-up script named My_startup. So when I need a new script to run automatically at start-up, all I have to do is add that script to My_startup. It becomes really easy to add new start-up scripts. The My_startup on my system looks something like this :-
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/user/.Myscripts
Startup_script#1 &
Startup_script#2 &
...
Startup_script#n &
Note: The PATH and SHELL variables should be defined in the script. The & at the end of each line launches the next script without waiting for the previous one to finish.
.bashrc method
.bashrc is nothing but a shell script that is run each time you launch a terminal. You can find the .bashrc file in your home folder (/home/user/) itself. You can add the scripts/commands that you want to run at the launch of a terminal to the .bashrc file. On my system I've used it to set variables.


The other tool that can be used to run scripts automatically is the crontab utility. It is powerful enough that we can schedule a script to run at a particular time: on a given day of the month, every minute, every hour, yearly, monthly and so on. We will have a detailed discussion on it later.

Tuesday, April 19, 2011

Changing system variables permanently

I wanted to change the system variable $PATH in such a way that bash executes scripts in the current working directory. I can do this simply with PATH="$PATH:.". While working on this I came to know about the following things.

You can change the value of a system variable simply by doing sys_var_name="new_value". But such an assignment is only temporary: once you close the terminal, the variables you changed are reset to their old values. So we would have to set the variables to our preferred values every time we launch a terminal. Doing it manually is really boring. So is there any way to do it automatically?
The answer is a big YES. '.bashrc' is a shell script that is executed every time a terminal is launched. '.bashrc' is kept in the user's home folder. You can edit the .bashrc script in such a way that it sets the system variables automatically for your convenience every time you launch a terminal. Thus you can change the system variables permanently. You can even run shell scripts of your choice automatically at the launch of a terminal by editing .bashrc :-)