Interesting perl One-liner

Monday, December 31, 2012 , 0 Comments

How do you double-space every line in a file, i.e. add an extra newline between lines?
I would have done it like this:

perl -pe 's/$/\n/' your_file

I also feel this is a clean way to do it: use a regex and replace every $ (end of line) with a newline.

Well, there is a much cooler way I found.

Did you know that there is an equivalent of awk's ORS in perl too?
Yes, there is:

$\ in perl = ORS in awk

So now if I run the command below, it does the same thing as the first command mentioned at the top:

perl -pe '$\="\n"' your_file
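
For comparison, the same idea written directly in awk (a sketch; setting ORS to two newlines double-spaces the output):

awk 'BEGIN{ORS="\n\n"} 1' your_file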


Return type of malloc

Monday, December 31, 2012 , 0 Comments

int *ptr = malloc(sizeof(int)*length);
int *ptr = (int *)malloc(sizeof(int)*length);
Which one is correct?
It's the first one. You don't cast the result, since:
It is unnecessary, as void * is automatically and safely promoted to any other
pointer type in this case.

It can hide an error if you forget to include <stdlib.h>.
This can cause crashes in the worst case.

It adds clutter to the code; casts are not very easy to read (especially if the pointer type is long).

It makes you repeat yourself, which is generally bad.
As a clarification, note that I said "you don't cast", not "you don't need to cast". In my opinion, it's a failure to include the cast, even if you got it right. There are simply no benefits to doing it, but a bunch of potential risks, and including the cast indicates that you don't know about the risks. Also note, as commentators point out, that the above applies to straight C, not C++. I very firmly believe in C and C++ as separate languages.
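
To make the recommended style concrete, here is a minimal sketch (the function name make_int_array and the error handling are illustrative):

#include <stdlib.h>   /* declares malloc; forgetting this include is exactly the error a cast can hide */

int *make_int_array(size_t length)
{
    int *ptr = malloc(sizeof(*ptr) * length);   /* no cast needed: void * converts implicitly in C */
    if (ptr == NULL) {
        return NULL;                            /* allocation failed */
    }
    return ptr;
}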


Moving a process from background to foreground

Monday, December 31, 2012 , 0 Comments

In Unix, if we run a process (say, a shell script) in the foreground, the prompt does not return to the terminal until the script ends.
Let's say there is a script temp.sh:
>cat temp.sh
#!/bin/sh
for i in 2 3 3 3 45
do
sleep 10
echo $i
done
This script sleeps for 10 seconds in every iteration and then prints the value of $i.
If I run the process as below:
> ./temp.sh
2
3
3
3
45
>
As you can see above, the script took 50 seconds to return to the terminal.
So it is better for us to run the process in the background:
> ./temp.sh &
[1] 20323
>
The number inside the square brackets is the job id and the number outside is the process id (pid). Now if you look at the ps output you can see the process running, and it also shows up in the output of jobs.
Yes, the command jobs lists all the background processes that were started by you.
>jobs
[1] + Running ./temp.sh
[2] - Running ./temp.sh
[3] Running ./temp.sh
[4] Running ./temp.sh
fg is the command to bring a job back to the foreground, as shown below.
>fg 1
Now press CTRL+C. The process has now been ended and no longer exists. If I run the command jobs again:
>jobs
[2]  + Running                       ./temp.sh
[3]    Running                       ./temp.sh
[4]  - Running                       ./temp.sh
>
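
For completeness, the reverse direction works too: a foreground job can be suspended with CTRL+Z and resumed in the background with bg (a sketch; the exact output varies by shell):

> ./temp.sh
[press CTRL+Z]
[1] + Stopped                       ./temp.sh
> bg %1
> fg %1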


Join every two lines in a file

Sunday, December 30, 2012 , 0 Comments

If we have an input file like

1
2
3
4
5
6
7

How can we join every two lines so that the output will be:

1 2
3 4
5 6
7

Below is a very simple solution for this:

paste - - <your_file
An awk alternative:

awk '{if(NR%2==0){line=line" "$0;print line;}else{line=$0}}END{if(NR%2)print line}' your_file
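
Another option is sed (a sketch; the $! guard stops N from swallowing the final line when the file has an odd number of lines):

sed '$!N;s/\n/ /' your_file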



Compare two files and replace

Sunday, December 30, 2012 , 0 Comments

This was a crazy question asked in a quiz at one of the competitions where I work.
They wanted it solved using perl only, and ASAP, so I came up with a one-liner.

File1
Hello HELLO
world WORLD
good GOOD
File2
Hello all this is just
a text file to demonstrate
some good programming. Welcome
to the world of programming.

Now, if the words in the first column of the first file are present in the second file, they
should be replaced with whatever is in the second column of the first file.

So the output would look like:

HELLO all this is just
a text file to demonstrate
some GOOD programming. Welcome
to the WORLD of programming.

Below is the solution I wrote in perl:

perl -lane 'BEGIN{$count=0;$flag=0}
            if($flag==1){$count=1}                       # $flag is set once File1 is done, so this is a File2 line
            $X{$F[0]}=$F[1];                             # build the word map while reading File1
            if(eof && $count!=1)                         # reached the end of File1: freeze the map
            {foreach(keys %X){$H{$_}=$X{$_}}
             $flag=1}
            foreach(@F){if(exists($H{$_})){$_=$H{$_}}}   # replace mapped words in the current line
            if($count!=0){print "@F"}                    # print only the lines of File2
           ' File1 File2
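
For what it's worth, the same job can also be done with a shorter sketch that uses @ARGV to tell the two files apart (an untested sketch, checked only against the sample input above):

perl -lne 'if(@ARGV){($k,$v)=split; $map{$k}=$v; next}    # @ARGV still holds "File2" while File1 is being read
           for $w (keys %map){ s/\b\Q$w\E\b/$map{$w}/g }  # substitute each mapped word in the File2 line
           print' File1 File2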


Print lines after every n lines in a file

Sunday, December 30, 2012 0 Comments


Input has these lines:

    line 1 
    line 2
    line 3
    line 4 
    line 5
    line 6
    line 7
    line 8
    line 9
    line 10

How do we write a script that prints only every fourth line, so that for the example input above we get:

    line 1
    line 5
    line 9

A solution for this is:

awk 'NR % 4 == 1'
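
With GNU sed, the same thing can be done using its first~step address form (a GNU extension, so it is not portable to other sed implementations):

sed -n '1~4p' your_file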


Delete duplicate lines in a file in Unix

Sunday, December 30, 2012 0 Comments

Below is a simple way to delete duplicate lines from a file:

awk '!x[$0]++' file.txt

Explanation:

Each and every line is used as a key in a hash (an associative array). The first time a line is seen, its key does not exist yet, so the value looked up is 0; the ++ then increments the count for that key by 1.

So the ! here means: when the value for the key is 0 (i.e. the key has not been seen before), the expression turns out to be true and the line is printed. But if the key already exists in the hash, its count is non-zero, ! makes the overall expression false, and the line is not printed (the count is still incremented).
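
The same idea in perl, for comparison (a sketch using an equivalent %seen hash):

perl -ne 'print unless $seen{$_}++' file.txt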


Delete comments from a C/C++ source file

Sunday, December 30, 2012 , 0 Comments

There can be many solutions, but below is a good one:
cpp -P -fpreprocessed t.c | grep -v '^[[:space:]]*$'
Another one is more complex. For example, this one-liner
perl -0777 -pe 's{/\*.*?\*/}{}gs' foo.c
will work in many but not all cases. You see, it's too simple-minded for certain kinds of C programs, in particular, those with what appear to be comments in quoted strings. For that, you'd need something like this, created by Jeffrey Friedl and later modified by Fred Curtis.
$/ = undef;
$_ = <>;
s#/\*[^*]*\*+([^/*][^*]*\*+)*/|("(\\.|[^"\\])*"|'(\\.|[^'\\])*'|.[^/"'\\]*)#defined $2 ? $2 : ""#gse;
print;
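
Since that longer version is a standalone script rather than a one-liner, it would be run roughly like this (the file name strip_comments.pl is just illustrative):

perl strip_comments.pl foo.c > foo.stripped.c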
The reference for this is perlfaq6.


Searching multiple strings in VI editor

Sunday, December 30, 2012 , 0 Comments

Searching three strings at a time:
/string1\|string2\|string3

Search and replace a string:
:%s/source/target/g

How do we merge two lines in command mode?
eg:
1
2

changes to
1 2

Below are the steps to perform the same:

Open the file in VI
Set the command mode
Go to line 1 (no need to be at the end of the line; the cursor can be anywhere)
Now press [SHIFT]+j
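
The same join can also be done as an ex command (a sketch; the range picks the lines to merge):

:1,2join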


Understanding C++ polymorphism

Sunday, December 30, 2012 , 0 Comments


Understanding of / requirements for polymorphism

To understand polymorphism - as the term is used in Computing Science - it helps to start from a simple test for and definition of it. Consider:
    Type1 x;
    Type2 y;

    f(x);
    f(y);
Here, f() is to perform some operation and is being given values x and y as inputs. To exhibit polymorphism, f() must be able to operate with values of at least two distinct types (e.g. int and double), finding and executing type-appropriate code.

C++ mechanisms for polymorphism

Explicit programmer-specified polymorphism

You can write f() such that it can operate on multiple types in any of the following ways:
  • Preprocessing:
    #define f(X) ((X) += 2)
    // (note: in real code, use a longer uppercase name for a macro!)
  • Overloading:
    void f(int& x)    { x += 2; }
    
    void f(double& x) { x += 2; }
  • Templates:
    template <typename T>
    void f(T& x) { x += 2; }
  • Virtual dispatch:
    struct Base { virtual Base& operator+=(int) = 0; };
    
    struct X : Base
    {
        X(int n) : n_(n) { }
        X& operator+=(int n) { n_ += n; return *this; }
        int n_;
    };
    
    struct Y : Base
    {
        Y(double n) : n_(n) { }
        Y& operator+=(int n) { n_ += n; return *this; }
        double n_;
    };
    
    void f(Base& x) { x += 2; } // run-time polymorphic dispatch
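
A quick usage sketch for the virtual-dispatch version above (the variables are illustrative; each call resolves the operator at run time):

    X x(1);
    Y y(2.5);
    f(x);   // dispatches to X::operator+=
    f(y);   // dispatches to Y::operator+=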

Other related mechanisms

Compiler-provided polymorphism for builtin types, Standard conversions, and casting/coercion are discussed later for completeness as:
  • they're commonly intuitively understood anyway (warranting an "oh, that" reaction),
  • they impact the threshold in requiring, and seamlessness in using, the above mechanisms, and
  • usefully detailed explanation is a fiddly distraction from more important concepts.

Terminology

Further categorisation

Given the polymorphic mechanisms above, we can categorise them in various ways:
  • When is the polymorphic type-specific code selected?
    • Run time means the compiler must generate code for all the types the program might handle while running, and at run-time the correct code is selected (virtual dispatch)
    • Compile time means the choice of type-specific code is made during compilation, and code not used may not even be compiled (every mechanism except virtual dispatch)
  • Which types are supported?
    • Ad-hoc meaning you must provide explicit code to support each type (e.g. overloading); you explicitly add support "for this" (as per ad hoc's meaning) type, some other "this", and maybe "that" too ;-).
    • Parametric meaning you can just try to use the function for various parameter types without specifically doing anything to enable its support for them (e.g. templates, macros). An object with functions/operators that act like the template/macro expects is all that template/macro needs to do its job, with the exact type being irrelevant. The "concepts" cut from C++11 help express and enforce such expectations - let's hope they make it into the next Standard.
      • Parametric polymorphism provides duck typing - a concept attributed to James Whitcomb Riley who apparently said "When I see a bird that walks like a duck and swims like a duck and quacks like a duck, I call that bird a duck.".
        template <typename Duck>
        void do_ducky_stuff(const Duck& x) { x.walk().swim().quack(); }
        
        do_ducky_stuff(Vilified_Cygnet());

"Polymorphic"

Alf Steinbach comments that in the C++ Standard, polymorphic only refers to run-time polymorphism using virtual dispatch. The general Comp. Sci. meaning is more inclusive, as per C++ creator Bjarne Stroustrup's glossary (http://www2.research.att.com/~bs/glossary.html):
polymorphism - providing a single interface to entities of different types. virtual functions provide dynamic (run-time) polymorphism through an interface provided by a base class. Overloaded functions and templates provide static (compile-time) polymorphism. TC++PL 12.2.6, 13.6.1, D&E 2.9.
This answer - like the question - relates C++ features to the Comp. Sci. terminology.

Discussion

With the C++ Standard using a narrower definition of "polymorphism" than the Comp. Sci. community, to ensure mutual understanding for your audience consider...
  • using unambiguous terminology ("can we make this code reusable for other types?" or "can we use virtual dispatch?" vs "can we make this code polymorphic?"), and/or
  • clearly defining your terminology.
Still, what's crucial to being a great C++ programmer is understanding what polymorphism's really doing for you...
    letting you write "algorithmic" code once and then apply it to many types of data
...and then be very aware of how different polymorphic mechanisms match your actual needs.
Run-time polymorphism suits:
  • input processed by factory methods and spat out as a heterogeneous object collection handled via Base*s (see the factory sketch after this list),
  • implementation chosen at runtime based on config files, command line switches, UI settings etc.,
  • implementation varied at runtime, such as for a state machine pattern,
  • pImpl idiom / managing re-compilation coupling.
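As a minimal sketch of the factory scenario in the first bullet (all names here are illustrative, not from any particular library):

#include <iostream>
#include <memory>
#include <vector>

struct Shape { virtual void draw() const = 0; virtual ~Shape() = default; };
struct Circle : Shape { void draw() const override { std::cout << "circle\n"; } };
struct Square : Shape { void draw() const override { std::cout << "square\n"; } };

// the factory picks the concrete type at run time, e.g. based on parsed input
std::unique_ptr<Shape> make_shape(char tag)
{
    return tag == 'c' ? std::unique_ptr<Shape>(new Circle)
                      : std::unique_ptr<Shape>(new Square);
}

int main()
{
    std::vector<std::unique_ptr<Shape>> shapes;   // heterogeneous collection handled via Shape*
    const char tags[] = {'c', 's', 'c'};
    for (char tag : tags)
        shapes.push_back(make_shape(tag));
    for (const auto& s : shapes)
        s->draw();                                // virtual dispatch per element
}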
When there's not a clear driver for run-time polymorphism, compile-time options are often preferable. Consider:
  • the compile-what's-called aspect of templated classes is preferable to fat interfaces failing at runtime
  • SFINAE
  • CRTP
  • optimisations (many including inlining and dead code elimination, loop unrolling, static stack-based arrays vs heap)
  • __FILE__, __LINE__, string literal concatenation and other unique capabilities of macros (which remain evil ;-))

Other mechanisms supporting polymorphism

As promised, for completeness several peripheral topics are covered:
  • compiler-provided overloads
  • conversions
  • casts/coercion
The document concludes with a discussion of how these combine to empower and simplify polymorphic code - especially parametric polymorphism (templates and macros).

Mechanisms for mapping to type-specific operations

> Implicit compiler-provided overloads
Conceptually, the compiler overloads many operators for builtin types. It's not conceptually different from user-specified overloading, but is listed as it's easily overlooked. For example, you can add to ints and doubles using the same notation x += 2 and the compiler produces:
  • type-specific CPU instructions
  • a result of the same type.
Overloading then seamlessly extends to user-defined types:
std::string x;
int y = 0;

x += 'c';
y += 'c';
Compiler-provided overloads for basic types are common in high-level (3GL+) computer languages, and explicit discussion of polymorphism generally implies something more. (2GLs - assembly languages - often require the programmer to explicitly use different mnemonics for different types.)
> Standard conversions
The C++ Standard's fourth section describes Standard conversions.
The first point summarises nicely (from an old draft - hopefully still substantially correct):
-1- Standard conversions are implicit conversions defined for built-in types. Clause conv enumerates the full set of such conversions. A standard conversion sequence is a sequence of standard conversions in the following order:
  • Zero or one conversion from the following set: lvalue-to-rvalue conversion, array-to-pointer conversion, and function-to-pointer conversion.
  • Zero or one conversion from the following set: integral promotions, floating point promotion, integral conversions, floating point conversions, floating-integral conversions, pointer conversions, pointer to member conversions, and boolean conversions.
  • Zero or one qualification conversion.
[Note: a standard conversion sequence can be empty, i.e., it can consist of no conversions. ] A standard conversion sequence will be applied to an expression if necessary to convert it to a required destination type.
These conversions allow code such as:
double a(double x) { return x + 2; }

a(3.14);
a(42);
Applying the earlier test:
To be polymorphic, [a()] must be able to operate with values of at least two distinct types (e.g. int and double), finding and executing type-appropriate code.
a() itself runs code specifically for double and is therefore not polymorphic.
But, in the second call to a() the compiler knows to generate type-appropriate code for a "floating point promotion" (Standard §4) to convert 42 to 42.0. That extra code is in the calling function. We'll discuss the significance of this in the conclusion.
> Coercion, casts, implicit constructors
These mechanisms allow user-defined classes to specify behaviours akin to builtin types' Standard conversions. Let's have a look:
int a, b;

if (std::cin >> a >> b)
    f(a, b);
Here, the object std::cin is evaluated in a boolean context, with the help of a conversion operator. This can be conceptually grouped with "integral promotions" et al from the Standard conversions in the topic above.
Implicit constructors effectively do the same thing, but are controlled by the cast-to type:
void f(const std::string& x);
f("hello");  // invokes `std::string::string(const char*)`

Implications of compiler-provided overloads, conversions and coercion

Consider:
void f()
{
    typedef int Amount;
    Amount x = 13;
    x /= 2;
    std::cout << x * 1.1;
}
If we want the amount x to be treated as a real number during the division (i.e. be 6.5 rather than rounded down to 6), we only need to change the typedef to double Amount.
That's nice, but it wouldn't have been too much work to make the code explicitly "type correct":
void f()                               void f()
{                                      {
    typedef int Amount;                    typedef double Amount;
    Amount x = 13;                         Amount x = 13.0;
    x /= 2;                                x /= 2.0;
    std::cout << double(x) * 1.1;          std::cout << x * 1.1;
}                                      }
But, consider that we can transform the first version into a template:
template <typename Amount>
void f()
{
    Amount x = 13;
    x /= 2;
    std::cout << x * 1.1;
}
It's due to those little "convenience features" that it can be so easily instantiated for either int or double and work as intended. Without these features, we'd need explicit casts, type traits and/or policy classes, some verbose, error-prone mess like:
template <typename Amount, typename Policy>
void f()
{
    Amount x = Policy::thirteen;
    x /= static_cast<Amount>(2);
    std::cout << traits<Amount>::to_double(x) * 1.1;
}
So, compiler-provided operator overloading for builtin types, Standard conversions, casting / coercion / implicit constructors - they all contribute subtle support for polymorphism. From the definition at the top of this answer, they address "finding and executing type-appropriate code" by mapping:
  • "away" from parameter types
    • from the many data types polymorphic algorithmic code handles
    • to code written for a (potentially lesser) number of (the same or other) types.
  • "to" parametric types from values of constant type
They do not establish polymorphic contexts by themselves, but do help empower/simplify code inside such contexts.
You may feel cheated... it doesn't seem like much. The significance is that in parametric polymorphic contexts (i.e. inside templates or macros), we're trying to support an arbitrarily large range of types but often want to express operations on them in terms of other functions, literals and operations that were designed for a small set of types. It reduces the need to create near-identical functions or data on a per-type basis when the operation/value is logically the same. These features cooperate to add an attitude of "best effort", doing what's intuitively expected by using the limited available functions and data and only stopping with an error when there's real ambiguity.
This helps limit the need for polymorphic code supporting polymorphic code, drawing a tighter net around the use of polymorphism so localised use doesn't force widespread use, and making the benefits of polymorphism available as needed without imposing the costs of having to expose implementation at compile time, have multiple copies of the same logical function in the object code to support the used types, and in doing virtual dispatch as opposed to inlining or at least compile-time resolved calls. As is typical in C++, the programmer is given a lot of freedom to control the boundaries within which polymorphism is used.


Reference is here
