Creating a child process in perl

Thursday, January 31, 2013

This example forks 10 child processes and waits for all of them to finish before exiting.


#!/usr/local/bin/perl

use strict;
use warnings;

print "Starting main program\n";
my @childs;

for ( my $count = 1; $count <= 10; $count++) {
        my $pid = fork();
        if (!defined $pid) {
                # fork failed (fork returns undef on failure)
                die "couldn't fork: $!\n";
        } elsif ($pid) {
                # parent
                #print "pid is $pid, parent $$\n";
                push(@childs, $pid);
        } else {
                # child
                sub1($count);
                exit 0;
        }
}
foreach (@childs) {
        my $tmp = waitpid($_, 0);
        print "done with pid $tmp\n";
}
print "End of main program\n";

sub sub1 {
        my $num = shift;
        print "started child process for  $num\n";
        sleep $num;
        print "done with child process for $num\n";
        return $num;
}


Output looks like:

Starting main program
started child process for  1
started child process for  2
started child process for  3
started child process for  4
started child process for  5
started child process for  6
started child process for  9
started child process for  10
started child process for  7
started child process for  8
done with child process for 1
done with pid 5584
done with child process for 2
done with pid 5585
done with child process for 3
done with pid 5586
done with child process for 4
done with pid 5587
done with child process for 5
done with pid 5588
done with child process for 6
done with pid 5589
done with child process for 7
done with pid 5590
done with child process for 8
done with pid 5591
done with child process for 9
done with pid 5593
done with child process for 10
done with pid 5594
End of main program
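
If you don't need to wait for specific PIDs, a shorter way to reap the children is to keep calling wait until it reports that no child processes remain (a minimal sketch, equivalent to the foreach/waitpid loop above):

# wait() blocks for any child and returns -1 once none are left
1 while wait() != -1;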


Explicit in C++

Thursday, January 31, 2013

Many a time we wonder what this explicit keyword is for whenever we come across it in C++ code. Here I would like to give a simple explanation of the keyword that I hope is convincing.
Suppose you have a class String:
class String {
public:
    String(int n);          // allocate n bytes to the String object
    String(const char *p);  // initialize the object with char *p
};
Now if you try
String mystring = 'x';
the char 'x' will be converted to an int and the String(int) constructor will be called. But this is probably not what the user intended. To prevent such implicit conversions, we can declare the class's constructor explicit.
class String {
public:
    explicit String(int n);  // allocate n bytes
    String(const char *p);   // initialize the object with string p
};
// with explicit, "String mystring = 'x';" no longer compiles


Creating a hash in perl with all the regex matches

Thursday, January 31, 2013 , 0 Comments

As I explained here: http://theunixshell.blogspot.com/2013/01/capturing-all-regex-matches-into-array.html, we can store every regex match in an array. Now let's see how to store every regex match in a hash instead.

Let's say we have a text file like the one below:

hello world 10 20
world 10 10 10 10 hello 20
hello 30 20 10 world 10


I want to store each decimal integer in the file as a hash key, with the count of its occurrences as the value for that key.

Below is the solution for that:

perl -lne '$a{$_}++for(/(\d+)/g);END{for(keys %a){print "$_.$a{$_}"}}' Your_file

The output generated will be:

> perl -lne '$a{$_}++for(/(\d+)/g);END{for(keys %a){print "$_.$a{$_}"}}' temp
30.1
10.7
20.3
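
Written out as a script, the same logic looks like this (a sketch; save it as, say, count.pl and run perl count.pl your_file):

#!/usr/bin/perl
use strict;
use warnings;

my %count;
while (<>) {
    $count{$_}++ for /(\d+)/g;   # count every number on the current line
}
print "$_.$count{$_}\n" for keys %count;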


Delete empty lines in a file

Tuesday, January 29, 2013

Sometimes there are empty lines in a file that we consider redundant and want to remove. Below is the Unix command to do that.
sed -i '/^$/d' your_file
But there is also another way to do this:
grep . your_file > dest_file
In perl we can also achieve this, as below (note: a plain s/^$// substitution would leave the newline behind, so the blank line would survive; skipping the line entirely works):
perl -ni -e 'print unless /^$/' your_file
The perl and sed solutions above do the replacement in place in the file.
If the "empty" lines contain whitespace, then:
perl -ni -e 'print unless /^\s*$/' your_file


Creating files using Awk

Thursday, January 24, 2013

I have 40,000 data files. Each file contains 1,445 lines in a single column. Now I need to rearrange the data in a different order: the first number from each data file needs to be collected and dumped into a new file (let's say abc1.dat). This particular file (abc1.dat) will contain 40,000 numbers.
The second number from each data file then needs to be extracted and dumped into another new file (let's say abc2.dat). This file will also contain 40,000 numbers, but only the second number from each data file. At the end of this operation I should have 1,445 files (abc1.dat, abc2.dat, ... abc1445.dat), each containing 40,000 numbers.

Below is a simple way to do it in awk:
awk '{print $1>"abc"FNR".dat"}' file*
file* above should expand to all 40,000 input files.
Some flavors of unix do not support the above syntax; in such a case go for:
awk '{x="abc"FNR".dat";print $1>x}' file*
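
One caveat: the awk versions keep all 1,445 output files open at once, which can exceed the per-process open-file limit in some awk implementations. A rough Perl equivalent that holds only one output handle at a time (a sketch; it appends, so remove any old abc*.dat files before rerunning):

perl -lane 'open my $fh, ">>", "abc$..dat" or die $!; print $fh $F[0]; close $fh; close ARGV if eof' file*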


List of columns in a table from SYBASE

Monday, January 21, 2013

I recently came across a need to create a file containing the columns of a table in Sybase. I thought it would be better to write a script that fetches the column details, taking table names as arguments, and stores the output in a text file. Below is a simple perl script to do it.

As you can see, the help text is included at the top of the script, so I guess I don't need to explain it.

#!/usr/bin/perl
######################################
#This is the script to list all the
#columns in a table. Table names
#should be given as arguments to the
#script.
#example:
#script_name table_name_1 table_name_2 ...so on
######################################

use strict;
use warnings;
my $result;
unlink("output.txt");
foreach(@ARGV)
{
$result = qx{isql -U<user_name> -P<password> -D<dbname> <<EOF
set nocount on
SELECT sc.name FROM syscolumns sc INNER JOIN sysobjects so ON sc.id = so.id WHERE so.name = '$_'
go
exit
EOF
};
my @lines = split /\s+\n/, $result;
splice @lines,0,2;
$_=~s/ //g foreach @lines;

my $outfile  = "output.txt";
open (OUTFILE, ">>$outfile") || die "ERROR: opening $outfile: $!\n";
print OUTFILE "$_\n------------\n".join "\n",@lines;
print OUTFILE "\n\n";
}
close OUTFILE;
################################################################################ 


Executing shell command in c/c++ and parsing the output

Thursday, January 17, 2013

Often we need to execute a shell command from C and get its output.
system() will simply execute the shell command, but there is no way to fetch the command's output with it.
Below is a way to execute the command and also read its output into a C/C++ buffer.
#include <stdio.h>
#include <string.h>

FILE* pipe = popen("your shell command here", "r");
if (pipe)
{
    char buffer[128];
    while (fgets(buffer, sizeof buffer, pipe) != NULL)
    {
        buffer[strcspn(buffer, "\n")] = '\0'; /* strip the trailing newline */
        /* process one line (or 127-byte chunk) of output here */
    }
    pclose(pipe);
}


Search a string in multiple files recursively

Thursday, January 17, 2013

Almost every unix programmer needs this at least once a day.
For searching for a string abc in all the files in the current directory we use:
grep 'abc' *

If you want to search files under any subdirectories as well, including the files in the current directory, then we have to combine find and grep (if file names may contain spaces, use find . -type f -print0 | xargs -0 grep 'abc' instead):

find . -type f|xargs grep 'abc'

Another possible way is using the -exec option of the find command:
find . -type f -exec grep 'abc' {} \;
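
A Perl equivalent, in the spirit of this blog (a sketch using the core File::Find module; it prints each matching line prefixed with its file name):

perl -MFile::Find -e 'find(sub { return unless -f; open my $fh, "<", $_ or return; while (my $line = <$fh>) { print "$File::Find::name: $line" if $line =~ /abc/ } }, ".")'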


Capturing all the regex matches into an array in perl

Wednesday, January 16, 2013

Often we need to collect the list of matches whenever we use a regex.
For example, let's say I have a file which looks like below:

Jan 1 1982
Dec 20 1983
jan 6 1984

Now, if I want to store all the decimal numbers in the file in an array, how?

The first thing is to use a regex match in list context and push all the numbers into an array.

Below is the code for it :

perl -lne 'push @a,/\d+/g;END{print "@a"}' your_file

The above will output:

> perl -lne 'push @a,/\d+/g;END{print "@a"}' your_file
1 1982 20 1983 6 1984


In the same way you can use any other pattern in the match and capture those matches in an array.
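
Written out as a script instead of a one-liner, it would look like this (a sketch):

#!/usr/bin/perl
use strict;
use warnings;

my @matches;
while (<>) {
    push @matches, /\d+/g;   # in list context, the match returns every number on the line
}
print "@matches\n";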


Perl's equivalent of "grep -f"

Thursday, January 10, 2013

Let's say there are two files:

> cat file1
693661
11002
10995
48981
79600
> cat file2
10993   item    0
11002   item    6
10995   item    7
79600   item    7
439481  item    5
272557  item    7
224325  item    7
84156   item    6
572546  item    7
693661  item    7

using
grep -f file1 file2
we can filter the data in file2 (grep treats each line of file1 as a pattern to match against file2).

I have written a perl equivalent of this, though not yet a complete one:
the command below simply compares the lines of file1 against the first field of file2, and prints the file2 rows whose first field matches.

> perl -F -lane 'BEGIN{$c=1}if($c==1){$x{$_}++};if(eof && $c==1){$c++;next};if($c==2){if($x{$F[0]}){print }}' file1 file2
11002   item    6
10995   item    7
79600   item    7
693661  item    7
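
A somewhat cleaner way to express the same idea (a sketch; while the first file is being read, @ARGV still holds the remaining file names, which is what distinguishes the two passes):

perl -lane 'if (@ARGV) { $seen{$_}++ } else { print if $seen{$F[0]} }' file1 file2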


Differences between Java and C++

Thursday, January 10, 2013

  1. C++ supports pointers whereas Java does not. When programmers questioned how you can work without pointers, the promoters began saying "restricted pointers", so we can say Java supports restricted pointers (references).
  2. At compilation time Java source code is converted into bytecode, and the interpreter (the JVM) executes this bytecode at run time. Java is interpreted for the most part and hence platform independent. C++ is compiled into machine-level code, so C++ binaries are platform dependent.
  3. Java is a platform-independent language, but C++ depends on the operating system, machine, etc. C++ source can be platform independent (and can target a lot more platforms, especially embedded ones), although the generated object code is generally platform dependent, though LLVM-based toolchains like Clang blur this a little with a portable intermediate representation.
  4. Java uses both a compiler and an interpreter; in C++ there is only a compiler.
  5. C++ supports operator overloading and multiple inheritance but Java does not.
  6. C++ is nearer to the hardware than Java.
  7. Everything (except fundamental types) is an object in Java (a single-root hierarchy, as everything derives from java.lang.Object).
  8. Java is similar to C++ but does not have all the complicated aspects of C++ (e.g. pointers, templates, unions, operator overloading, structures, etc.). Java also does not support conditional compilation (the #ifdef/#ifndef kind).
  9. Thread support is built into Java but not into C++. C++11, the most recent iteration of the C++ language, does have thread support though.
  10. Internet support is built into Java but not into C++. However, C++ has support for socket programming, which can be used.
  11. Java does not support header files or include/library files like C++; Java uses import to bring in different classes and methods.
  12. Java does not support default arguments like C++.
  13. There is no scope resolution operator :: in Java. It has ., with which we can qualify classes with the package they came from.
  14. There is no goto statement in Java.
  15. Exception handling and automatic garbage collection work differently in Java because there are no destructors in Java.
  16. Java has method overloading, but no operator overloading, unlike C++.
  17. The String class does use the + and += operators to concatenate strings, and String expressions use automatic type conversion.
  18. Java is pass-by-value.
  19. Java does not support unsigned integers.


Bulk rename of files in unix

Wednesday, January 09, 2013

Most of the time we need to rename files in bulk.
This can be done in many ways; most people do it by writing a simple shell script.

I found a better way to do it from the command line, in a single command.
Let's say we have some files as shown below. Now I want to remove the part -(ab...) from those file names.
> ls -1 foo*
foo-bar-(ab-4529111094).txt
foo-bar-foo-bar-(ab-189534).txt
foo-bar-foo-bar-bar-(ab-24937932201).txt
So the expected file names would be :
> ls -1 foo*
foo-bar-foo-bar-bar.txt
foo-bar-foo-bar.txt
foo-bar.txt
Below is a simple way to do it.
> ls -1 | nawk '/foo-bar-/{old=$0;gsub(/-\(.*\)/,"",$0);system("mv \""old"\" "$0)}'
Explanation:

ls -1
Will list all the files in a single column in a directory

nawk '/foo-bar-/ 
The processing is done only for file names which have foo-bar- as part of their name.

old=$0 
  initially storing the file name in a variable.
gsub(/-\(.*\)/,"",$0) 
 Removing the undesired part of the file name.

mv \""old"\" "$0
This will be expanded to: mv "foo-bar-(ab-4529111094).txt" foo-bar.txt. You might ask why the escaped quotes: because the file name might contain a space or another character special to the shell (in our case it has '(' ).

system("mv \""old"\" "$0) 
This will execute on the command line whatever is inside the system() call.

Note: nawk is specific to Solaris; on other flavours of unix plain awk is enough.
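
For completeness, the same rename can be done in pure Perl, which sidesteps the shell-quoting issue entirely (a sketch):

perl -e 'for (glob "foo*") { my $new = $_; $new =~ s/-\(.*\)//; rename $_, $new or warn "$_: $!\n" }'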


Shebang line in shell script

Tuesday, January 08, 2013

I have a simple perl script as below:

#!/usr/bin/perl

use strict;
use warnings;

print "hello ! world\n";

I can execute this script as below:
>temp.pl
hello ! world
>

If I add some comments above the shebang, like this:
 
#this script is just for test
#the shebang
#!/usr/bin/perl
use strict;
use warnings;
print "hello ! world\n";
 
and when I try to execute it, the output is as below:

> temp.pl
use: Command not found.
use: Command not found.
print: Command not found.
> 
 
The point here is that the shebang line should always be at the top, no matter what. But can anybody explain why?

What does the wiki say?

In computing, a shebang (also called a sha-bang, hashbang, pound-bang, hash-exclam, or hash-pling) is the character sequence consisting of the characters number sign and exclamation mark (that is, "#!") when it occurs as the initial two characters on the initial line of a script.

Under Unix-like operating systems, when a script with a shebang is run as a program, the program loader parses the rest of the script's initial line as an interpreter directive; the specified interpreter program is run instead, passing to it as an argument the path that was initially used when attempting to run the script. For example, if a script is named with the path "path/to/script" and its first line is "#!/bin/sh", the loader runs /bin/sh with path/to/script as its argument.

Why should it always be the first line?

The shebang must be the first line because it is interpreted by the kernel, which looks at the two bytes at the start of an executable file. If these are #!, the rest of the line is interpreted as the path of the interpreter to run, and the script file's own path is passed to that interpreter as an argument. (Details vary slightly between systems, but that is the picture.)

Since the kernel looks only at the first two characters of the file and has no notion of further lines, you must place the hash-bang on line 1.

Now what happens if the kernel can't execute a file beginning with #whatever? The shell, attempting to run the file and being informed by the kernel that it can't execute the program, as a last resort attempts to interpret the file contents as a shell script. Since the shell is not perl, you get a bunch of errors, exactly the same as if you had fed the script to the shell by hand.


Indentation in vi editor

Monday, January 07, 2013

Many of us get irritated working in the vi editor, since we are more used to MS Word or other Windows editors. Below I present some very useful tips for indentation in the vi editor.

In the commands below, "re-indent" means "indent lines according to your indentation settings." shiftwidth is the primary variable that controls indentation.

General Commands
>>   Indent line by shiftwidth spaces
<<   De-indent line by shiftwidth spaces
5>>  Indent 5 lines
5==  Re-indent 5 lines

>%   Increase indent of a braced or bracketed block (place cursor on brace first)
=%   Reindent a braced or bracketed block (cursor on brace)
<%   Decrease indent of a braced or bracketed block (cursor on brace)
]p   Paste text, aligning indentation with surroundings

=i{  Re-indent the 'inner block', i.e. the contents of the block
=a{  Re-indent 'a block', i.e. block and containing braces
=2a{ Re-indent '2 blocks', i.e. this block and containing block

>i{  Increase inner block indent
<i{  Decrease inner block indent

You can replace { with } or B, e.g. =iB is a valid block indent command. Take a look at "Indent a Code Block" for a nice example to try these commands out on.

Also, remember that
.    Repeat last command
, so indentation commands can be easily and conveniently repeated.

Re-indenting complete files
Another common situation is requiring indentation to be fixed throughout a source file:
gg=G  Re-indent entire buffer
You can extend this idea to multiple files:
" Re-indent all your c source code:
:args *.c
:argdo normal gg=G
:wall
Or multiple buffers:
" Re-indent all open buffers:
:bufdo normal gg=G
:wall

In Visual Mode
Vjj> Visually mark and then indent 3 lines

In insert mode
These commands apply to the current line:
CTRL-T   insert indent at start of line
CTRL-D   remove indent at start of line
0 CTRL-D remove all indentation from line

Ex commands
These are useful when you want to indent a specific range of lines, without moving your cursor.
:< and :> Given a range, apply indentation e.g.
:4,8>   indent lines 4 to 8, inclusive

Indenting using markers
Another approach is via markers:
ma     Mark top of block to indent as marker 'a'
...move cursor to end location
>'a    Indent from marker 'a' to current location

Variables that govern indentation
You can set these in your .vimrc file.
set expandtab       "Use softtabstop spaces instead of tab characters for indentation
set shiftwidth=4    "Indent by 4 spaces when using >>, <<, == etc.
set softtabstop=4   "Indent by 4 spaces when pressing <TAB>

set autoindent      "Keep indentation from previous line
set smartindent     "Automatically inserts indentation in some cases
set cindent         "Like smartindent, but stricter and more customisable
Vim has intelligent indentation based on filetype. Try adding this to your .vimrc:
if has("autocmd")
    " File type detection. Indent based on filetype. Recommended.
    filetype plugin indent on
endif


Splitting a string in C++

Monday, January 07, 2013

Once I was writing a tool for processing some csv files, where I needed to split strings and store the pieces in a vector. I wrote a function for it which I feel like sharing. Below is the function to split a string:
#include <string>
#include <sstream>
#include <vector>

std::vector<std::string> &split(const std::string &s, char delim, std::vector<std::string> &elems) {
    std::stringstream ss(s);
    std::string item;
    while (std::getline(ss, item, delim)) {
        elems.push_back(item);
    }
    return elems;
}

std::vector<std::string> split(const std::string &s, char delim) {
    std::vector<std::string> elems;
    return split(s, delim, elems);
}

The given function splits the string at the delimiter and returns a vector of strings. I would also like to mention that if the string contains empty fields, then an empty element is inserted into the vector.

A simpler way would be:
#include <sstream>  // for std::istringstream
#include <iterator> // for std::istream_iterator
#include <vector>   // for std::vector
#include <string>

std::string line;
while (std::getline(in, line)) // 'in' is any input stream, e.g. a std::ifstream
{
    std::istringstream ss(line);
    std::istream_iterator<std::string> begin(ss), end;
    // putting all the whitespace-separated tokens in the vector
    std::vector<std::string> arrayTokens(begin, end);
    // arrayTokens contains all the tokens - use it!
}


The Definitive C++ Book Guide and List

Monday, January 07, 2013

This question attempts to collect the few pearls among the dozens of bad C++ books that are released every year. Unlike many other programming languages, which are often picked up on the go from tutorials found on the Internet, few are able to quickly pick up C++ without studying a good C++ book. It is way too big and complex for doing this. In fact, it is so big and complex that there are many bad C++ books out there. And we are not talking about bad style, but things like sporting glaringly obvious factual errors and promoting abysmally bad programming styles. And it's even worse with online tutorials. (There is a reason nobody bothered to set up a similar question for online tutorials.)
Please provide quality books and an approximate skill level. Add a short blurb/description about each book that you have personally read/benefited from. Feel free to debate quality, headings, etc. Books that meet the criteria will be added to the list. Books that have reviews by the Association of C and C++ Users (ACCU) have links to the review.

Reference Style - All Levels

  1. The C++ Programming Language (Bjarne Stroustrup) (soon to be updated for C++11) The classic introduction to C++ by its creator. Written to parallel the classic K&R, it indeed reads very much like it and covers just about everything from the core language to the standard library, to programming paradigms, to the language's philosophy. (Thereby making the latest editions break the 1k-page barrier.) [Review]
  2. C++ Standard Library Tutorial and Reference (Nicolai Josuttis) (updated for C++11) The introduction and reference for the C++ Standard Library. The second edition (released on April 9, 2012) covers C++11. [Review]
  3. The C++ IO Streams and Locales (Angelika Langer and Klaus Kreft) There's very little to say about this book except that, if you want to know anything about streams and locales, then this is the one place to find definitive answers. [Review]
C++ 11 References:
  1. The C++ Standard (INCITS/ISO/IEC 14882-2011) This, of course, is the final arbiter of all that is or isn't C++. Be aware, however, that it is intended purely as a reference for experienced users willing to devote considerable time and effort to its understanding. As usual, the first release was quite expensive ($300+ US), but it has now been released in electronic form for $30US -- probably the least expensive of the reference books listed here.
  2. Overview of the New C++ (C++11) By Scott Meyers, who's a highly respected author on C++. Even though the list of items is short, the quality is high.

Beginner

Introductory

If you are new to programming or if you have experience in other languages and are new to C++, these books are highly recommended.
  1. C++ Primer† (Stanley Lippman, Josée Lajoie, and Barbara E. Moo) (updated for C++11) Coming in at 1k pages, this is a very thorough introduction to C++ that covers just about everything in the language in a very accessible format and in great detail. The fifth edition (released August 16, 2012) covers C++11. [Review]
  2. Accelerated C++ (Andrew Koenig and Barbara Moo) This basically covers the same ground as the C++ Primer, but does so on a fourth of its space. This is largely because it does not attempt to be an introduction to programming, but an introduction to C++ for people who've previously programmed in some other language. It has a steeper learning curve, but, for those who can cope with this, it is a very compact introduction into the language. (Historically, it broke new ground by being the first beginner's book using a modern approach at teaching the language.) [Review]
  3. Thinking in C++ (Bruce Eckel) Two volumes; second is more about standard library, but still very good
  4. Programming: Principles and Practice Using C++ (Bjarne Stroustrup) An introduction to programming using C++ by the creator of the language. A good read, that assumes no previous programming experience, but is not only for beginners.
† Not to be confused with C++ Primer Plus (Stephen Prata), with a significantly less favorable review.

Best practices

  1. Effective C++ (Scott Meyers) This was written with the aim of being the best second book C++ programmers should read, and it succeeded. Earlier editions were aimed at programmers coming from C, the third edition changes this and targets programmers coming from languages like Java. It presents ~50 easy-to-remember rules of thumb along with their rationale in a very accessible (and enjoyable) style. [Review]
  2. Effective STL (Scott Meyers) This aims to do for the part of the standard library coming from the STL what Effective C++ did for the language as a whole: it presents rules of thumb along with their rationale. [Review]

Intermediate

  1. More Effective C++ (Scott Meyers) Even more rules of thumb than Effective C++. Not as important as the ones in the first book, but still good to know.
  2. Exceptional C++ (Herb Sutter) Presented as a set of puzzles, this has one of the best and thorough discussions of the proper resource management and exception safety in C++ through Resource Acquisition is Initialization (RAII) in addition to in-depth coverage of a variety of other topics including the pimpl idiom, name lookup, good class design, and the C++ memory model. [Review]
  3. More Exceptional C++ (Herb Sutter) Covers additional exception safety topics not covered in Exceptional C++, in addition to discussion of effective object oriented programming in C++ and correct use of the STL. [Review]
  4. Exceptional C++ Style (Herb Sutter) Discusses generic programming, optimization, and resource management; this book also has an excellent exposition of how to write modular code in C++ by using nonmember functions and the single responsibility principle. [Review]
  5. C++ Coding Standards (Herb Sutter and Andrei Alexandrescu) "Coding standards" here doesn't mean "how many spaces should I indent my code?" This book contains 101 best practices, idioms, and common pitfalls that can help you to write correct, understandable, and efficient C++ code. [Review]
  6. C++ Templates: The Complete Guide (David Vandevoorde and Nicolai M. Josuttis) This is the book about C++ templates. It covers everything from the very basics to some of the most advanced template metaprogramming and explains every detail of how templates work (both conceptually and at how they are implemented) and discusses many common pitfalls. Has excellent summaries of the One Definition Rule (ODR) and overload resolution in the appendices. [Review]

Above Intermediate

  1. Modern C++ Design (Andrei Alexandrescu) A groundbreaking book on advanced generic programming techniques. Introduces policy-based design, type lists, and fundamental generic programming idioms then explains how many useful design patterns (including small object allocators, functors, factories, visitors, and multimethods) can be implemented efficiently, modularly, and cleanly using generic programming. [Review]
  2. C++ Template Metaprogramming (David Abrahams and Aleksey Gurtovoy)
  3. C++ Concurrency In Action (Anthony Williams) A book covering C++11 concurrency support including the thread library, the atomics library, the C++ memory model, locks and mutexes, as well as issues of designing and debugging multithreaded applications.

Classics / Older

Note: Some information contained within these books may not be up to date or no longer considered best practice.
  1. The Design and Evolution of C++ (Bjarne Stroustrup) If you want to know why the language is the way it is, this book is where you find answers. This covers everything before the standardization of C++.
  2. Ruminations on C++ - (Andrew Koenig and Barbara Moo) [Review]
  3. Advanced C++ Programming Styles and Idioms (James Coplien) A predecessor of the pattern movement, it describes many C++-specific "idioms". It's certainly a very good book and still worth a read if you can spare the time, but quite old and not up-to-date with current C++.
  4. Large Scale C++ Software Design (John Lakos) Lakos explains techniques to manage very big C++ software projects. Certainly a good read, if only it were up to date. It was written long before C++98 and misses many features (e.g. namespaces) important for large-scale projects. If you need to work on a big C++ software project, you might want to read it, although you will need to take more than a grain of salt with it. There has been a rumor for years that Lakos is writing an up-to-date edition of the book.
  5. Inside the C++ Object Model (Stanley Lippman) If you want to know how virtual member functions are commonly implemented and how base objects are commonly laid out in memory in a multi-inheritance scenario, and how all this affects performance, this is where you will find thorough discussions of such topics.


Modify file names containing spaces using perl

Friday, January 04, 2013

Many a time we see configuration files which have lists of paths like:

/abc/123_1/1tvd/fiel.txt
/abc/123 r/1tvd/fie2l.txt
/abc/123 a/1tvd/fie3l.txt
/abc xyz/123/1tvd/fie4l.txt
and sometimes the spaces create a lot of problems when we use those configuration files.
In such cases we need to quote the directory names that contain spaces, as in
/"abc xyz"/123/1tvd/fie4l.txt
But how do we do it across the complete file?

Below is the perl way to do it:
perl -F"/" -ane 'foreach (@F)
                 {if(/ /){$_="\"".$_."\"";}}
                 print join "/",@F;' config_file
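
To rewrite the config file in place instead of printing to stdout, adding -i should do it (a sketch):

perl -i -F"/" -ane 'foreach (@F)
                 {if(/ /){$_="\"".$_."\"";}}
                 print join "/",@F;' config_file
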
But yes, there can be other ways too, like in awk.


Passing a bash variable to awk in a shell script

Thursday, January 03, 2013

Many a time there is a need to pass shell variables to awk inside a shell script.

Below is the way to pass them; the execution here was done on Solaris. Note that setenv is csh/tcsh syntax; in bash you would set the variable with export X=hello instead.

> setenv X "hello"
> echo $X | nawk -v str="${X}" '{print str}'
hello
>


Capitalize (change to uppercase) the first letter of each word in a line

Thursday, January 03, 2013

How can you capitalize each word in a line using perl?

Let's say there is a file:
> cat temp
hi this is world


How can we make it
Hi This Is World


It's very simple to do in perl:

perl -pe '$_=~s/\b(\w)/\U$1/g;' your_file


If you want to do it in place, just add the -i flag:

perl -pi -e '$_=~s/\b(\w)/\U$1/g;' your_file
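
As a side note, \u (which uppercases just the next character) is an even tighter fit here than \U (which uppercases everything up to \E), since only a single character is captured:

perl -pi -e 's/\b(\w)/\u$1/g;' your_file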


Deleting lines from the end of a file in perl

Thursday, January 03, 2013

In general, if you want to select or delete lines from a file based on a range of line numbers from the beginning, we can do it as below.
For example:

>cat file
1
2
3
4
5
6
7
8
9
>

Now if I want to print lines 4 through 6, I do the below:
tail +4 file|head -3

The logic behind this is:
tail +(start line) file | head -(6-4+1)

Now suppose you want to delete lines 4 through 6; then:

awk 'NR<4 || NR>6' file

Suppose we want to delete the last 5 lines of a file; use the below (tail -r, which reverses a file, is available on Solaris and BSD; on Linux use tac instead):

tail -r temp | nawk 'NR>5'|tail -r

Now the tricky part: what if I want to delete lines counted from the end, i.e., from the 6th-from-last line through the 4th-from-last line?

Below is the logic:

perl -lne 'push(@a,$_);if(eof){splice @a,$.-6,6-4+1;print join "\n",@a}' file

The logic behind this: at end of file, $. holds the total number of lines, so $.-6 is the index of the 6th-from-last line, and 6-4+1 = 3 lines are spliced out starting there.
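
For large files, keeping the whole file in @a can be avoided with a sliding window of the last 6 lines (a sketch: lines falling out of the window are printed immediately, and at eof the first 3 lines of the window, i.e. the 6th- through 4th-from-last, are dropped):

perl -lne 'push @a,$_; print shift @a if @a > 6; if (eof) { splice @a, 0, 3; print for @a }' file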


Using Join Command in Unix

Wednesday, January 02, 2013

The join command is one of the text processing utilities in Unix/Linux. It is used to combine two files based on a matching field in the files. If you know SQL, the join command is similar to joining two tables in a database.

The syntax of join command is

join [options] file1 file2

The join command options are
-1 field_number : join on the specified field number in the first file
-2 field_number : join on the specified field number in the second file
-j field_number : equivalent to -1 field_number and -2 field_number
-o list : display only the specified fields from both files
-t char : input and output field delimiter
-a file_number : print non-matched lines from the given file
-i : ignore case while joining


Unix Join Command Examples

1. Write a join command to join two files on the first field.

The basic usage of the join command is to join two files on the first field. By default, the join command matches the files on their first fields when we do not specify the field numbers explicitly. Let's say we have two files, emp.txt and dept.txt:

> cat emp.txt
10 mark
10 steve
20 scott
30 chris


> cat dept.txt
10 hr
20 finance
30 db


Here we will join on the first field and see the output. By default, the join command treats the field delimiter as space or tab.
> join emp.txt dept.txt
10 mark hr
10 steve hr
20 scott finance
30 chris db


Important Note: Before joining, make sure the files are sorted on the joining fields. Otherwise you will get incorrect results.
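
For comparison, a rough Perl equivalent of this first example (a sketch; it loads dept.txt into a hash and appends the department name to each matching emp.txt row, and unlike join it does not need sorted input):

perl -lane 'if (@ARGV) { $d{$F[0]} = $F[1] } else { print "@F $d{$F[0]}" if exists $d{$F[0]} }' dept.txt emp.txt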

2. Write a join command to join the two files, using the second field from the first file and the first field from the second file.
In this example, we will see how to join two files on fields other than the first. For this, consider the below two files as an example.

> cat emp.txt
mark 10 1
steve 10 1
scott 20 2
chris 30 3


> cat dept.txt
10 hr 1
20 finance 2
30 db 3


From the above, you can see the join fields are the second field of emp.txt and the first field of dept.txt. The join command to match these two files is

> join -1 2 -2 1 emp.txt dept.txt
10 mark 1 hr 1
10 steve 1 hr 1
20 scott 2 finance 2
30 chris 3 db 3


Here -1 2 specifies the second field from the first file (emp.txt) and -2 1 specifies the first field from the second file (dept.txt).
You can also see that the two files can be joined on the third field. As both files have the matching join field in the same position, you can use the -j option:

> join -j 3 emp.txt dept.txt
1 mark 10 10 hr
1 steve 10 10 hr
2 scott 20 20 finance
3 chris 30 30 db


3. Write a join command to select the required fields from the input files in the output. Select the first field from the first file and the second field from the second file in the output.
By default, the join command prints all the fields from both files (the join field is printed only once). We can choose which fields to print with the -o option. We will use the same files as in the above example.

> join -o 1.1 2.2 -1 2 -2 1 emp.txt dept.txt
mark hr
steve hr
scott finance
chris db


Here 1.1 means the first field of the first file; similarly, 2.2 means the second field of the second file.

4. Write a command to join two delimited files, where the delimiter is a colon (:).
So far we have joined files with whitespace delimiters. Here we will see how to join files that use a colon as the delimiter. Consider the below two files.

> cat emp.txt
mark:10
steve:10
scott:20
chris:30


> cat dept.txt
10:hr
20:finance
30:db

The -t option is used to specify the delimiter. The join command for joining the files is

> join -t: -1 2 -2 1 emp.txt dept.txt
10:mark:hr
10:steve:hr
20:scott:finance
30:chris:db


5. Write a command to ignore case when joining the files.
If the join fields are in different cases, the join will not be performed properly. To ignore case in the join, use the -i option.

> cat emp.txt
mark,A
steve,a
scott,b
chris,C


> cat dept.txt
a,hr
B,finance
c,db


> join -t, -i -1 2 -2 1 emp.txt dept.txt
A,mark,hr
a,steve,hr
b,scott,finance
C,chris,db


6. Write a join command to print the lines which do not match on the joining fields.
By default the join command prints only the matched lines from both files, that is, the lines that passed the join condition. We can use the -a option to print the non-matched lines as well.

> cat P.txt
A 1
B 2
C 3


> cat Q.txt
B 2
C 3
D 4


Print non-pairable lines from the first file:

> join -a 1 P.txt Q.txt
A 1
B 2 2
C 3 3


Print non-pairable lines from the second file:

> join -a 2 P.txt Q.txt
B 2 2
C 3 3
D 4


Print non-pairable lines from both files:

> join -a 1 -a 2 P.txt Q.txt
A 1
B 2 2
C 3 3
D 4
