Elegance is subjective, but I can answer at least one of your questions, and suggest some things that might shorten or improve your code.
"is there a way to load just the required part of the file at loading time" - in the code you showed, I don't see any need to load the entire file into memory. The typical way to process a file line-by-line, and the equivalent of what Perl's -n and -p switches do, is this pattern:
open my $fh, '<', $file or die "$file: $!";
while (<$fh>) {                   # reads each line into $_
    my @fields = split;           # splits $_ on whitespace, like awk
    my ($foo, $bar, $some, $thing) = @fields[3,8,9,15];
    ...
}
close $fh;
I consider that fairly elegant, but based on what you're writing I guess you're comparing it to one-liners of piped commands that fit within maybe 100 characters. Perl can do that too: as the comments have already mentioned, have a look at the switches -n, -p, -a, -F, and -i. If you show some concrete examples of things you want to do, you'll probably get some replies showing how to do them more concisely in Perl.
But if you're going to be doing more, then it's usually better to expand that into a script like the one above. IMHO, putting things into a script gives you more power: it's not ephemeral like the command-line history, it's more easily extensible, it's easier to use modules, and you can add command-line options, process multiple files, and so on. Just as an example, with the following snippet you get all the power of Text::CSV - support for quoting, escaping, multiline strings, etc.:
use Text::CSV;
my $csv = Text::CSV->new({ binary => 1, auto_diag => 2, eol => $/ });
open my $fh, '<', $file or die "$file: $!";
while ( my $row = $csv->getline($fh) ) {
    ...
    $csv->print(select, $row);   # write the (possibly modified) row to the current output handle
}
$csv->eof or $csv->error_diag;
close $fh;
You might also want to check out that module's csv function, which provides a lot of functionality in a single short call. If you still think that's all too "painful" and "dirty" and you'd rather do stuff with less code, then there are a few shortcuts you could take - for example, to slurp a whole file into memory, my $data = do { local (*ARGV, $/) = $file; <> };, or to do the same as the -i command-line switch:
local ($^I, @ARGV) = ('.bak', $file);
while (<>) {
    # s///; or @F = split; or whatever
    print;   # prints $_ back out (to the file, since $^I is set)
}
One thing I like about Perl is that it lets you express yourself in lots of different ways - whether you want to hack together a really short script to take care of a one-time task, or write a big OO project: TIMTOWTDI ("There Is More Than One Way To Do It")!