369

This line worked until I had whitespace in the second field.

svn status | grep '\!' | gawk '{print $2;}' > removedProjs

Is there a way to have awk print everything in $2 or greater? ($3, $4, and so on until we run out of columns?)

I suppose I should add that I'm doing this in a Windows environment with Cygwin.
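
To illustrate the problem (the path name here is made up): with awk's default whitespace splitting, $2 only captures the part of the path before the first space, so the rest of the file name is lost.

$ echo '!       my project/file.txt' | gawk '{print $2;}'
my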

Andy
  • As an aside, the [`grep | awk` is an antipattern](http://www.iki.fi/era/unix/award.html#grep) -- you want `awk '/!/ { print $2 }'` – tripleee Sep 18 '15 at 09:34
  • Unix "cut" is easier... `svn status | grep '\!' | cut -d' ' -f2- > removedProjs` – roblogic Sep 02 '16 at 03:23
  • Possible duplicate of [print rest of the fields in awk](http://stackoverflow.com/questions/18457486/print-rest-of-the-fields-in-awk) – acm Mar 15 '17 at 08:25
  • @tripleee: I'm so happy that you mentioned this - I'm frustrated at seeing it everywhere! – Graham Nicholls Nov 05 '18 at 16:07

24 Answers

561

Print all columns:

awk '{print $0}' somefile

Print all but the first column:

awk '{$1=""; print $0}' somefile

Print all but the first two columns:

awk '{$1=$2=""; print $0}' somefile
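
Note (as the comments below point out): assigning to a field makes awk rebuild $0 with single spaces (OFS) between fields, so the emptied fields leave a leading space and runs of whitespace collapse. A quick illustration with made-up input:

$ echo 'a   b  c' | awk '{$1=""; print $0}'
 b c
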
zed_0xff
  • @zed very nice, just make sure you use `$1` for anything you might need, before changing it. – Peter Ajtai Jul 07 '12 at 06:10
  • gotcha: leaves a leading space dangling about :( – raphinesse Jan 08 '13 at 03:09
  • `cat somefile` can be always replaced with `< somefile` at the end of `awk` command – Shiplu Mokaddim Aug 27 '13 at 07:34
  • @raphinesse you can fix that with `awk '{$1=""; print substr($0,2)}' input_filename > output_filename` – themiurgo Sep 12 '13 at 15:07
  • This doesn't work with non-whitespace delimiters, replaces them with a space. – Dejan Oct 31 '13 at 19:28
  • Another thing to note (when used with the default separator): folds spans of multiple embedded spaces into a single space each. If this is undesired or not even a concern, here's a simpler, more easily generalized solution to get everything from the 3rd field - simply replace `{2}` with the `{}`-enclosed number of preceding fields to eliminate; adapted from @savvadia's answer: `awk '{sub(/^[ ]*([^ ]+ +){2}/, ""); print $0}'` – mklement0 Jan 21 '14 at 16:05
  • Great answer. This prints all but last column: `cat somefile | awk '{$NF=""; print $0}'` – lucasart Aug 21 '14 at 09:37
  • Exactly what I needed! Now, if I want to exclude columns 1-8, is there a way to put a range in there? Rather than `$1=$2=$3=$4=$5=$6=$7=$8=""` – onebree Aug 13 '15 at 15:14
  • For non-whitespace delimiters, you can specify the Output Field Separator (OFS), e.g. to a comma: `awk -F, -vOFS=, '{$1=""; print $0}'` You will end up with an initial delimiter (`$1` is still included, just as an empty string). You can strip that with `sed` though: `awk -F, -vOFS=, '{$1=""; print $0}' | sed 's/^,//'` – cherdt Jul 07 '16 at 23:55
  • This answer is not correct. It will only replace the first and second fields with an empty space. – Jithu Paul Jun 21 '18 at 15:39
  • For multiple lines, and space in names, see answer: https://stackoverflow.com/a/49130247/3154883 – Brother Aug 14 '18 at 13:06
  • @raphinesse - Tried in MobaXterm (a cygwin derivative) so not sure if it will work in all cases, but `awk '{$1=$2="\b"; print $0}' somefile` will remove the leading white space. Works with `awk '{$1=$2=$3=$4"\b"; print $0}' somefile` as well – JayRugMan Oct 19 '18 at 15:59
  • You can also do it with non-whitespace delimiters with `awk -F'-' 'BEGIN{OFS=FS};{$1=$2="\b"; print $0}'` – JayRugMan Oct 19 '18 at 19:44
  • AWK is like the overly literal genie who grants three wishes – Ryan Ward Oct 30 '18 at 23:33
  • @JayRugMan, the `$1=$2="\b"` trick works nicely visually. However, I found that if you do further piping commands (such as `cut`), they may not work properly because the `\b` will be passed on. You can see it with `| tr '\b' '_'`. – wisbucky Jul 15 '19 at 20:47
  • `awk '{$1=""; printf substr($0,1)"\n"}'` worked well for me -- removed the leading space. – The Tomahawk Nov 13 '20 at 14:46
  • to remove leading space you can also just use gsub: `awk '/>/ {$1=""; gsub(/^ /,""); print}' somefile` – Salix May 17 '21 at 20:52
110

There's a duplicate question with a simpler answer using cut:

 svn status |  grep '\!' | cut -d\  -f2-

-d specifies the delimiter (space), -f specifies the list of columns (all starting with the 2nd).
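
One caveat (also raised in the comments below): cut treats every single space as a delimiter, so runs of spaces produce empty fields, unlike awk. A made-up example:

$ echo 'a  b c' | cut -d' ' -f2-
 b c
$ echo 'a  b c' | awk '{print $2}'
b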

Joshua Goldberg
  • You can also use "-b" to specify the position (from the Nth character onwards). – Dakatine Sep 10 '13 at 13:56
  • As a note, although this performs the same task as the `awk` version, there are line buffering issues with `cut`, which `awk` doesn't have: http://stackoverflow.com/questions/14360640/tail-f-into-grep-into-cut-not-working-properly – sdaau Nov 26 '13 at 19:24
  • Nice and simple, but comes with a caveat: `awk` treats multiple adjacent space chars. as a *single* separator, while `cut` does not; also - although this is not a problem in the case at hand - `cut` only accepts a single, literal char. as the delimiter, whereas `awk` allows a regex. – mklement0 Jan 21 '14 at 14:55
  • Based on this: https://stackoverflow.com/a/39217130/8852408, it is probable that this solution isn't very efficient. – Joaquin Jul 19 '18 at 03:32
  • @Joaquin I upvoted your comment but then ran some quick, non-scientific benchmarks on a log file of 120MB (`time cut -d\  -f2- logfile.txt > /dev/null` vs. `time awk '{$1=""; print $0}' logfile.txt > /dev/null`). The `cut` command (without any `grep`) was consistently faster than the `awk` equivalent (average time of `cut` was 70% of the `awk` command). It looks like `cut` is slower at "seeking" through a file to get to a certain line -- but is efficient at processing each line at a time. – Anthony Geoghegan Jan 21 '21 at 20:09
98

You could use a for-loop to loop through printing fields $2 through $NF (built-in variable that represents the number of fields on the line).

Edit: Since "print" appends a newline, you'll want to buffer the results:

awk '{out=""; for(i=2;i<=NF;i++){out=out" "$i}; print out}'

Alternatively, use printf:

awk '{for(i=2;i<=NF;i++){printf "%s ", $i}; printf "\n"}'
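
With made-up input, both variants work but leave a stray space (a leading one in the first form, a trailing one in the second), which later answers and the comments address:

$ echo 'a b c d' | awk '{out=""; for(i=2;i<=NF;i++){out=out" "$i}; print out}'
 b c d
$ echo 'a b c d' | awk '{for(i=2;i<=NF;i++){printf "%s ", $i}; printf "\n"}'
b c d 
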
VeeArr
  • So I tried this, but think I'm missing something.. here is what I did: svn status | grep '\!' | gawk '{for (i=1; i<=$NF; i++)print $i " ";}' > removedProjs – Andy Jun 02 '10 at 21:35
  • Since print appends a newline, you'll want to buffer the results. See my edit. – VeeArr Jun 02 '10 at 21:53
  • When I do that I get a msg that says ^ unterminated string – Andy Jun 02 '10 at 22:02
  • Here's my exact line: svn status | grep '\!' | gawk "{out=""; for(i=2; i<= NF; i++){out=$out" "$i;} print $out}" > removedProjs – Andy Jun 02 '10 at 22:03
  • It is because you are using " " to enclose your awk script. Use ' ' instead. Also, it may be simpler to just use printf instead of buffering the result. – VeeArr Jun 02 '10 at 22:11
  • OK, figured out that the error is because I use double quotes, but I think something with cygwin doesn't like single quotes, cause then it says it is missing the file. – Andy Jun 02 '10 at 22:16
  • I like this answer better because it shows how to loop through fields. – Edward Falk Jun 02 '11 at 18:52
  • If you want print to use a space, change the output record separator: awk '{ORS=" "; for(i=2;i<=NF;i++) print $i}' – Christian Lescuyer Apr 08 '12 at 08:10
  • At least on my machine, @VeeArr, your syntax is wrong for your first code line. The reply by Wim works. You should not have '$out' (twice). It should just be 'out'. – Randy Skretka Feb 22 '13 at 22:43
  • Very good solution because it is open for extension where the highest rated answer's is not. – migu Mar 07 '13 at 09:43
  • Have you tested your answer? `awk '{out=""; for(i=2;i<=NF;i++){out=$out" "$i}; print $out}'` won't work at all. Use `awk '{out=""; for(i=2;i<=NF;i++){out=out" "$i}; print out}'` instead. – zeekvfu Jul 25 '14 at 07:30
  • There will always be some spaces too much. This works better: `'{for(i=11;i<=NF-1;i++){printf "%s ", $i}; print $NF;}'` No leading or trailing spaces. – Marki May 14 '17 at 18:09
  • Awesome and really elegant solution. – Alchemist Dec 29 '17 at 06:53
26
awk '{out=$2; for(i=3;i<=NF;i++){out=out" "$i}; print out}'

My answer is based on VeeArr's, but I noticed that it printed a leading space before the second column (and the rest). As I only have 1 reputation point, I can't comment on it, so here it goes as a new answer:

Start with "out" as the second column and then append all the other columns (if they exist). This works as long as there is a second column.
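
With made-up input there is no leading space (runs of whitespace between the kept fields still collapse to single spaces, though):

$ echo 'a b   c d' | awk '{out=$2; for(i=3;i<=NF;i++){out=out" "$i}; print out}'
b c d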

Wim
18

Most solutions with awk leave a leading space. The options here avoid that problem.

Option 1

A simple cut solution (works only when fields are separated by a single delimiter character):

command | cut -d' ' -f3-

Option 2

Forcing awk to re-calculate the record sometimes removes the added leading space (OFS) left by deleting the first fields (this works with some versions of awk):

command | awk '{ $1=$2="";$0=$0;} NF=NF'
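
For example, with GNU awk (behaviour may differ in other awk implementations) and the sample input used in the later options:

$ in='    1    2  3     4   5   6 7     8  '
$ echo "$in" | awk '{ $1=$2="";$0=$0;} NF=NF'
3 4 5 6 7 8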

Option 3

Printing each field formatted with printf will give more control:

$ in='    1    2  3     4   5   6 7     8  '
$ echo "$in"|awk -v n=2 '{ for(i=n+1;i<=NF;i++) printf("%s%s",$i,i==NF?RS:OFS);}'
3 4 5 6 7 8

However, all the previous answers change all repeated FS between fields to OFS. Let's build a couple of options that do not do that.

Option 4 (recommended)

A loop with sub to remove fields and delimiters at the front, using the value of FS instead of a literal space (which could have been changed).
This is more portable and doesn't trigger a change of FS to OFS.
NOTE: the ^[FS]* is there to accept input with leading spaces.

$ in='    1    2  3     4   5   6 7     8  '
$ echo "$in" | awk '{ n=2; a="^["FS"]*[^"FS"]+["FS"]+";
  for(i=1;i<=n;i++) sub( a , "" , $0 ) } 1 '
3     4   5   6 7     8

Option 5

It is quite possible to build a solution that does not add extra (leading or trailing) whitespace and preserves existing whitespace, using the function gensub from GNU awk, like this:

$ echo '    1    2  3     4   5   6 7     8  ' |
  awk -v n=2 'BEGIN{ a="^["FS"]*"; b="([^"FS"]+["FS"]+)"; c="{"n"}"; }
          { print(gensub(a""b""c,"",1)); }'
3     4   5   6 7     8 

It also may be used to swap a group of fields given a count n:

$ echo '    1    2  3     4   5   6 7     8  ' |
  awk -v n=2 'BEGIN{ a="^["FS"]*"; b="([^"FS"]+["FS"]+)"; c="{"n"}"; }
          {
            d=gensub(a""b""c,"",1);
            e=gensub("^(.*)"d,"\\1",1,$0);
            print("|"d"|","!"e"!");
          }'
|3     4   5   6 7     8  | !    1    2  !

Of course, in such a case, the OFS is used to separate both parts of the line, and the trailing whitespace of the fields is still printed.

NOTE: [FS]* is used to allow leading spaces in the input line.

13

I personally tried all the answers mentioned above, but most of them were a bit complex or just not right. The easiest way to do it, from my point of view, is:

awk -F" " '{ for (i=4; i<=NF; i++) print $i }'

  1. Where -F" " defines the delimiter for awk to use. In my case it is whitespace, which is also the default delimiter for awk. This means that -F" " can be omitted.

  2. Where NF defines the total number of fields/columns. Therefore the loop will begin from the 4th field up to the last field/column.

  3. Where $N retrieves the value of the Nth field. Therefore print $i will print the current field/column based on the loop count (see also the note after this list).
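
Note that plain print writes each field on its own line here (every print call ends with the output record separator); using printf "%s ", $i inside the loop, as in other answers, keeps the fields on one line. With made-up input:

$ echo 'f1 f2 f3 f4 f5' | awk -F" " '{ for (i=4; i<=NF; i++) print $i }'
f4
f5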

koullislp
10
awk '{ for(i=3; i<=NF; ++i) printf $i""FS; print "" }'

lauhub proposed this correct, simple and fast solution here
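
With made-up input this prints the remaining fields followed by a trailing separator. One caveat: the field value is used as the printf format string here, so a field containing a % character would be misinterpreted.

$ echo 'a b c d e' | awk '{ for(i=3; i<=NF; ++i) printf $i""FS; print "" }'
c d e 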

ajendrex
8

This was irritating me so much, I sat down and wrote a cut-like field specification parser, tested with GNU Awk 3.1.7.

First, create a new Awk library script called pfcut, with e.g.

sudo nano /usr/share/awk/pfcut

Then, paste in the script below, and save. After that, this is what the usage looks like:

$ echo "t1 t2 t3 t4 t5 t6 t7" | awk -f pfcut --source '/^/ { pfcut("-4"); }'
t1 t2 t3 t4

$ echo "t1 t2 t3 t4 t5 t6 t7" | awk -f pfcut --source '/^/ { pfcut("2-"); }'
t2 t3 t4 t5 t6 t7

$ echo "t1 t2 t3 t4 t5 t6 t7" | awk -f pfcut --source '/^/ { pfcut("-2,4,6-"); }'
t1 t2 t4 t6 t7

To avoid typing all that, I guess the best one can do (see otherwise Automatically load a user function at startup with awk? - Unix & Linux Stack Exchange) is to add an alias to ~/.bashrc; e.g. with:

$ echo "alias awk-pfcut='awk -f pfcut --source'" >> ~/.bashrc
$ source ~/.bashrc     # refresh bash aliases

... then you can just call:

$ echo "t1 t2 t3 t4 t5 t6 t7" | awk-pfcut '/^/ { pfcut("-2,4,6-"); }'
t1 t2 t4 t6 t7

Here is the source of the pfcut script:

# pfcut - print fields like cut
#
# sdaau, GNU GPL
# Nov, 2013

function spfcut(formatstring)
{
  # parse format string
  numsplitscomma = split(formatstring, fsa, ",");
  numspecparts = 0;
  split("", parts); # clear/initialize array (for e.g. `tail` piping into `awk`)
  for(i=1;i<=numsplitscomma;i++) {
    commapart=fsa[i];
    numsplitsminus = split(fsa[i], cpa, "-");
    # assume here a range is always just two parts: "a-b"
    # also assume user has already sorted the ranges
    #print numsplitsminus, cpa[1], cpa[2]; # debug
    if(numsplitsminus==2) {
     if ((cpa[1]) == "") cpa[1] = 1;
     if ((cpa[2]) == "") cpa[2] = NF;
     for(j=cpa[1];j<=cpa[2];j++) {
       parts[numspecparts++] = j;
     }
    } else parts[numspecparts++] = commapart;
  }
  n=asort(parts); outs="";
  for(i=1;i<=n;i++) {
    outs = outs sprintf("%s%s", $parts[i], (i==n)?"":OFS); 
    #print(i, parts[i]); # debug
  }
  return outs;
}

function pfcut(formatstring) {
  print spfcut(formatstring);
}
sdaau
6

Would this work?

awk '{print substr($0,length($1)+1);}' < file

It leaves some whitespace in front though.
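
With made-up input, the separator(s) that follow the first field are kept, hence the leading whitespace:

$ echo 'a   b c' | awk '{print substr($0,length($1)+1);}'
   b c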

whaley
5

Printing out columns starting from #2 (the output will have no extra space at the beginning):

ls -l | awk '{sub(/[^ ]+ /, ""); print $0}'
savvadia
  • Nice, though you should add `+` after the space, since the fields may be separated by more than 1 space (`awk` treats multiple adjacent spaces as a single separator). Also, `awk` will ignore leading spaces, so you should start the regex with `^[ ]*`. With space as the separator you could even generalize the solution; e.g., the following returns everything from the 3rd field: `awk '{sub(/^[ ]*([^ ]+ +){2}/, ""); print $0}'` It gets trickier with arbitrary field separators, though. – mklement0 Jan 21 '14 at 16:04
4
echo "1 2 3 4 5 6" | awk '{ $NF = ""; print $0}'

This one uses awk to print all fields except the last.

Birei
3

This is the approach I preferred out of all the recommendations:

Printing from the 6th column to the last:

ls -lthr | awk '{out=$6; for(i=7;i<=NF;i++){out=out" "$i}; print out}'

or

ls -lthr | awk '{ORS=" "; for(i=6;i<=NF;i++) print $i;print "\n"}'
Manuel Parra
2

If you need specific columns printed with an arbitrary delimiter:

awk '{print $3 "  " $4}'

col#3 col#4

awk '{print $3 "anything" $4}'

col#3anythingcol#4

So if a column contains whitespace it will be split into two columns, but you can join them with any delimiter, or with none.

I159
2

Perl solution:

perl -lane 'splice @F,0,1; print join " ",@F' file

These command-line options are used:

  • -n loop around every line of the input file, do not automatically print every line

  • -l removes newlines before processing, and adds them back in afterwards

  • -a autosplit mode – split input lines into the @F array. Defaults to splitting on whitespace

  • -e execute the perl code

splice @F,0,1 cleanly removes column 0 from the @F array

join " ",@F joins the elements of the @F array, using a space in-between each element
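
A quick check with made-up input:

$ echo 'a b c d' | perl -lane 'splice @F,0,1; print join " ",@F'
b c d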


Python solution:

python -c "import sys;[sys.stdout.write(' '.join(line.split()[1:]) + '\n') for line in sys.stdin]" < file

Chris Koknat
1

If you don't want to reformat the part of the line that you don't chop off, the best solution I can think of is written in my answer in:

How to print all the columns after a particular number using awk?

It chops what is before the given field number N, and prints all the rest of the line, including field number N and maintaining the original spacing (it does not reformat). It doesn't matter if the string of the field also appears somewhere else in the line.

Define a function:

fromField () { 
awk -v m="\x01" -v N="$1" '{$N=m$N; print substr($0,index($0,m)+1)}'
}

And use it like this:

$ echo "  bat   bi       iru   lau bost   " | fromField 3
iru   lau bost   
$ echo "  bat   bi       iru   lau bost   " | fromField 2
bi       iru   lau bost 

Output maintains everything, including trailing spaces.

In your particular case:

svn status | grep '\!' | fromField 2 > removedProjs

If your file/stream does not contain new-line characters in the middle of the lines (you could be using a different Record Separator), you can use:

awk -v m="\x0a" -v N="3" '{$N=m$N ;print substr($0, index($0,m)+1)}'

The first case will fail only on files/streams that contain the rare hexadecimal character 0x01.

Robert Vila
1

I want to extend the proposed answers to the situation where fields are delimited by possibly several whitespace characters (which is, I suppose, the reason the OP is not using cut).

I know the OP asked about awk, but a sed approach would work here (example with printing columns from the 5th to the last; a quick check of the variants follows the list):

  • pure sed approach

      sed -r 's/^\s*(\S+\s+){4}//' somefile
    

    Explanation:

    • s/// is the standard command to perform substitution
    • ^\s* matches any consecutive whitespace at the beginning of the line
    • \S+\s+ means a column of data (non-whitespace chars followed by whitespace chars)
    • (){4} means the pattern is repeated 4 times.
  • sed and cut

      sed -r 's/^\s+//; s/\s+/\t/g' somefile | cut -f5-
    

    by just replacing consecutive whitespaces by a single tab;

  • tr and cut: tr can also be used to squeeze consecutive characters with the -s option.

      tr -s [:blank:] <somefile | cut -d' ' -f5-
    
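A quick check of the first and third variants with made-up input (the second one gives the same fields, but separated by a tab; also note the brackets in the tr set need quoting in most shells, and a line with leading blanks would make cut count an extra empty first field):

$ echo 'one  two   three four five six' | sed -r 's/^\s*(\S+\s+){4}//'
five six
$ echo 'one  two   three four five six' | tr -s '[:blank:]' | cut -d' ' -f5-
five six
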
PlasmaBinturong
0

Perl:

@m=`ls -ltr dir | grep ^d | awk '{print \$6,\$7,\$8,\$9}'`;
chomp(@m);  # each element keeps its trailing newline from the backticks; strip it
foreach $i (@m)
{
        print "$i\n";
}
pkm
  • This doesn't answer the question, which generalises the requirement to _printing from the Nth column to the end_. – roaima Nov 12 '15 at 10:49
0

This works if you are using Bash; use as many placeholder variables (x) as there are leading fields you wish to discard, and it ignores multiple spaces as long as they are not escaped. An example with two discarded columns follows the one-liner.

while read x b; do echo "$b"; done < filename
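
For example, discarding the first two columns (made-up input):

$ echo 'a b c d e' | while read x y rest; do echo "$rest"; done
c d e
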
Steven Penny
0

This awk function returns the substring of $0 that covers fields begin through end:

function fields(begin, end,    b, e, p, i) {
    b = 0; e = 0; p = 0;
    for (i = 1; i <= NF; ++i) {
        if (begin == i) { b = p; }
        p += length($i);
        e = p;
        if (end == i) { break; }
        p += length(FS);
    }
    return substr($0, b + 1, e - b);
}

To get everything starting from field 3:

tail = fields(3);

To get section of $0 that covers fields 3 to 5:

middle = fields(3, 5);

The seemingly pointless b, e, p, i entries in the function parameter list are just the awk way of declaring local variables.
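
One way to try it (the file name fields.awk is hypothetical, and the offsets are only exact when fields are separated by a single occurrence of FS): save the function together with a rule such as { print fields(3) } into fields.awk and run:

$ echo 'a b c d e' | awk -f fields.awk
c d e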

wonder.mice
-1
ls -la | awk '{o=$1" "$3; for (i=5; i<=NF; i++) o=o" "$i; print o }'

from this answer is not bad, but the natural spacing is gone.
Now compare it to this one:

ls -la | cut -d\  -f4-

Then you'd see the difference.

Even ls -la | awk '{$1=$2=""; print}', which is based on the answer voted best thus far, does not preserve the formatting.

Thus I would use the following, which also allows selecting explicit columns at the beginning:

ls -la | cut -d\  -f1,4-

Note that every space counts as a column separator too, so in the example below, columns 1 and 3 are empty, column 2 is INFO and column 4 is 2014-10-11:

$ echo " INFO  2014-10-11 10:16:19  main " | cut -d\  -f1,3

$ echo " INFO  2014-10-11 10:16:19  main " | cut -d\  -f2,4
INFO 2014-10-11
$
arntg
-1

If you want formatted text, chain your commands with echo and use $0 to print the last field.

Example:

for i in {8..11}; do
   s1="$i"
   s2="str$i"
   s3="str with spaces $i"
   echo -n "$s1 $s2" | awk '{printf "|%3d|%6s",$1,$2}'
   echo -en "$s3" | awk '{printf "|%-19s|\n", $0}'
done

Prints:

|  8|  str8|str with spaces 8  |
|  9|  str9|str with spaces 9  |
| 10| str10|str with spaces 10 |
| 11| str11|str with spaces 11 |
syntax
-1

The awk examples look complex here; here is simple Bash shell syntax:

command | while read -a cols; do echo ${cols[@]:1}; done

Where 1 is the index of the first column to keep, counting from 0.


Example

Given this content of file (in.txt):

c1
c1 c2
c1 c2 c3
c1 c2 c3 c4
c1 c2 c3 c4 c5

here is the output:

$ while read -a cols; do echo ${cols[@]:1}; done < in.txt 

c2
c2 c3
c2 c3 c4
c2 c3 c4 c5
kenorb
-1

I wasn't happy with any of the awk solutions presented here because I wanted to extract the first few columns and then print the rest, so I turned to perl instead. The following code extracts the first two columns, and displays the rest as is:

echo -e "a  b  c  d\te\t\tf g" | \
  perl -ne 'my @f = split /\s+/, $_, 3; printf "first: %s second: %s rest: %s", @f;'
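
This should print something like the following, writing <TAB> for the literal tab characters that are preserved in the rest part:

first: a second: b rest: c  d<TAB>e<TAB><TAB>f g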

The advantage compared to the perl solution from Chris Koknat is that really only the first n elements are split off from the input string; the rest of the string isn't split at all and therefore stays completely intact. My example demonstrates this with a mix of spaces and tabs.

To change the number of columns that should be extracted, replace the 3 in the example with n+1.

Martin von Wittich
-9

Because of the wrong most-upvoted answer with 340 votes, I just lost 5 minutes of my life! Did anybody try this answer out before upvoting it? Apparently not. Completely useless.

I have a log where, after $5 (an IP address), there can be more text or no text. I need everything from the IP address to the end of the line, should there be anything after $5. In my case, this is actually within an awk program, not an awk one-liner, so awk must solve the problem. When I try to remove the first 4 fields using the most upvoted but completely wrong answer:

echo "  7 27.10.16. Thu 11:57:18 37.244.182.218 one two three" | awk '{$1=$2=$3=$4=""; printf "[%s]\n", $0}'

it spits out a wrong and useless response (I added [..] to demonstrate):

[    37.244.182.218 one two three]

There are even some suggestions to combine substr with this wrong answer, as if that complication were an improvement.

Instead, if columns are fixed width until the cut point and awk is needed, the correct answer is:

echo "  7 27.10.16. Thu 11:57:18 37.244.182.218 one two three" | awk '{printf "[%s]\n", substr($0,28)}'

which produces the desired output:

[37.244.182.218 one two three]
Pila