
I can view the log using the following command:

aws logs get-log-events --log-group-name groupName --log-stream-name streamName --limit 100

What is the command to get behavior like tail -f, so that I can see the log in real time?

Shizzmo
LynAs
  • You can use this command line utility for Python https://pypi.org/project/qaws/ – Jurass May 25 '20 at 05:57
  • If you want the results within memory rather than console output, i.e. `boto3` instead of `awscli`, https://gist.github.com/alanyee/601f995bfd6acfd4c3c16ee7e9115ab5 – Flair Jan 07 '21 at 23:19
  • 1
    TL/DR; Since mid-2019, newer versions of `aws-cli` include: `aws logs tail $group_name` (and it supports handy options like `--since "1h"` and --follow (like `tail -f`) and `--filter-pattern "blah"` (like `grep`).) – MarkHu Jan 27 '21 at 01:16

11 Answers


I was really disappointed with awslogs and cwtail, so I made my own tool called Saw that efficiently streams CloudWatch logs to the console (and colorizes the JSON output).

You can install it on macOS with:

brew tap TylerBrock/saw
brew install saw

It has a bunch of nice features like the ability to automatically expand (indent) the JSON output (try running the tool with --expand):

saw watch my_log_group --expand

Got a Lambda you want to see error logs for? No problem:

saw watch /aws/lambda/my_func --filter error 

Saw is great because the output is easily readable and you can stream logs from an entire log group, not just a single stream in the group. Filtering and watching streams with a certain prefix is just as easy!

Tyler Brock

Note that tailing an AWS log is now a supported feature of the official awscli, albeit only in awscli v2, which is not yet released. Tailing and following the logs (like tail -f) can now be accomplished with something like:

aws logs tail $group_name --follow

To install the v2 version, see the instructions on this page. It was implemented in this PR. To see it demonstrated at the last re:Invent conference, see this video.

In addition to tailing the logs, it allows viewing logs back to a specified time using the --since parameter, which can take an absolute or relative time:

aws logs tail $group_name --since 5d

To keep the v1 and v2 versions of awscli separate, I installed awscli v2 into a separate python virtual environment and activate it only when I need to use awscli v2.

Anton I. Sipos
  • Used this link for awscli v2: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html . Try installing it using the '-i' and '-b' options to provide custom install and binary file locations. – Ashish Sharma Dec 05 '19 at 07:52
  • this is the way to go - it greatly simplifies the usage! – sashok_bg Jun 19 '20 at 09:13
  • 2
    Backported as a v1 plugin, for those unable to use v2 : https://pypi.org/project/awscli-plugin-logs-tail/ – Steve Jones Oct 06 '20 at 02:24
  • If you want to look at the current state of the code, it is present in the `v2` branch under https://github.com/aws/aws-cli/tree/v2/awscli/customizations/logs – Flair Jan 07 '21 at 00:48

Have a look at awslogs.

If you happen to be working with Lambda/API Gateway specifically, have a look at apilogs.

RyanG
    `awslogs` package is amazing. Solved a problem I had this morning where a business team member just wanted to "grep the logs to find stuff". Definitely the way to go for simple solutions. – Adam Link Dec 08 '17 at 18:57
  • Meh, install python for a cli application? Crappy looking output? awslogs isn't great. I am biased, but I much prefer saw: https://github.com/TylerBrock/saw – Tyler Brock Jun 20 '18 at 14:23
    Minus one point for shameless self promotion. Looks cool though! – RyanG Jun 20 '18 at 19:08
    @TylerBrock a lot of console apps are built on Python, including the official aws-cli, so your point is not 100% valid. Reasoning aside, great tool you've built! – Hnatt Sep 17 '18 at 16:33
  • but... python comes pre-installed in a lot of Unix-based OSes :) – RicardoE Dec 18 '18 at 21:47
  • Awslogs are not bad but this one looks better to me: https://pypi.org/project/qaws/ – Jurass May 25 '20 at 05:57
  • @TylerBrock `saw` does look cool. But I have to point out that you complained about needing to install python, while `saw` is asking me to do something far more painful: `Please update to Xcode 11.3.1`. I'm on OS 10.14 still so I'd actually have to update my OS too! – totalhack Dec 03 '20 at 15:00
  • @totalhack if you get it from the releases page it should be already compiled for you, just extract the archive and run! – Tyler Brock Dec 03 '20 at 23:58

I've just discovered cwtail and it works well (to watch a lambda function's CloudWatch logs).

To install:

npm install -g cwtail

To list log groups:

cwtail -l

Then, once you've picked which log group to 'tail':

cwtail -f /aws/lambda/ExampleFunction

Greg Sadetsky
  • And if you wanted to format the json coming back from cwtail, this might work. Running in a bash shell. $ echo '{"j":[' `cwtail -e /aws/connect/carbonated | sed 's/$/,/' | sed '$ s/.$//'` ']}' | python -m json.tool – Chai Ang Sep 05 '19 at 23:37

Because CloudWatch logs can be delayed (i.e. not "realtime" by a precise definition), this script parses the previous batch of events for the last timestamp and starts the next iteration there. It uses aws logs get-log-events, for which you must specify a valid stream name.

#!/bin/bash

group_name='<log-group-name>'
stream_name='<log-stream-name>'
start_seconds_ago=300

start_time=$(( ( $(date -u +"%s") - $start_seconds_ago ) * 1000 ))
while [[ -n "$start_time" ]]; do
    loglines=$(aws logs get-log-events --log-group-name "$group_name" --log-stream-name "$stream_name" --start-time $start_time --output text)
    [ $? -ne 0 ] && break
    next_start_time=$( sed -nE 's/^EVENTS.([[:digit:]]+).+$/\1/ p' <<< "$loglines" | tail -n1 )
    [ -n "$next_start_time" ] && start_time=$(( $next_start_time + 1 ))
    echo "$loglines"
    sleep 15
done
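
To see what that sed extraction does in isolation, you can feed it a fabricated sample line in the shape of the get-log-events --output text output (the field order here — EVENTS, ingestion time, message, timestamp — is an assumption for illustration):

```shell
# Fabricated sample line mimicking `aws logs get-log-events --output text`
sample=$'EVENTS\t1590000000123\tHello world\t1590000000000'

# Same extraction as in the script: capture the first run of digits after "EVENTS"
next_start_time=$(sed -nE 's/^EVENTS.([[:digit:]]+).+$/\1/ p' <<< "$sample" | tail -n1)
echo "$next_start_time"    # prints 1590000000123
```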

Or if you want to tail an entire log group, this script uses aws logs filter-log-events without a stream name:

#!/bin/bash

group_name='<log-group-name>'
start_seconds_ago=300

start_time=$(( ( $(date -u +"%s") - $start_seconds_ago ) * 1000 ))
while [[ -n "$start_time" ]]; do
    loglines=$(aws logs filter-log-events --log-group-name "$group_name" --interleaved --start-time $start_time --output text)
    [ $? -ne 0 ] && break
    next_start_time=$( sed -nE 's/^EVENTS.([^[:blank:]]+).([[:digit:]]+).+$/\2/ p' <<< "$loglines" | tail -n1 )
    [ -n "$next_start_time" ] && start_time=$(( $next_start_time + 1 ))
    echo "$loglines"
    sleep 15
done

I've also put up the scripts that I use as GitHub gists: https://gist.github.com/tekwiz/964a3a8d2d84ff4c8b5288d9a703fbce.

Warning: the above code & scripts are written for my macOS system, which is customized (bastardized??) with Homebrew and GNU coreutils, so some command options may need to be tweaked for your system. Edits are welcome :)
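
That said, the start-time computation itself only needs seconds-since-epoch, which both GNU and BSD date provide, so a sketch like this should behave the same on Linux and stock macOS:

```shell
start_seconds_ago=300

# date -u +%s (seconds since the epoch) is portable across GNU and BSD date;
# multiply by 1000 because CloudWatch timestamps are in milliseconds
start_time=$(( ( $(date -u +%s) - start_seconds_ago ) * 1000 ))
echo "$start_time"
```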

MarkHu
Travis Warlick

To tail CloudWatch Logs effectively I created a tool called cw.

It's super easy to install (it supports brew, snap and scoop), fast (it targets the specific hardware architecture, with no intermediate runtime), and it has a set of features that make life easier.

Your example with cw would be:

cw tail -f groupName:streamName

Luca Grulla

I created a JetBrains plugin called awstail to do this :)

godzsa

You can use awslogs, a Python package, to tail AWS CloudWatch logs.

Install it with

pip install awslogs

List all the groups with

awslogs groups        

Then select a stream and watch it with

awslogs get staging-cluster --watch

You can also filter logs with matching patterns.

# tail logs of a cluster
awslogs get staging-cluster --watch

# tail logs of a lambda function
awslogs get /aws/lambda/some-service --watch

# print all logs containing "error"
awslogs get staging-cluster --watch --filter-pattern="error"

# print all logs *not* containing "error"
awslogs get staging-cluster --watch --filter-pattern="-error"

See project readme for more information on using awslogs.

ChillarAnand
  • This Python utility uses better time ranges and provides CloudWatch Insights queries: https://pypi.org/project/qaws/ – Jurass May 25 '20 at 05:59

The aws cli does not provide a live tail -f option.

The other tools mentioned above do provide a tailing feature; however, I tried them all (awslogs, cwtail) and found them frustrating. They were slow to download events, often unreliable, unhelpful in displaying JSON log data, and primitive in their query options.

I wanted an extremely fast, simple log viewer that would let me instantly and easily see application errors and status. The CloudWatch logs viewer is slow, and CloudWatch Insights can take more than a minute for some pretty basic queries.

So I created SenseLogs, a free AWS CloudWatch Logs viewer that runs entirely in your browser; no server-side services are required. SenseLogs transparently downloads log data and stores events in your browser application cache for immediate viewing, smooth infinite scrolling, and full-text queries. SenseLogs has live tail with infinite back scrolling. See https://github.com/sensedeep/senselogs/blob/master/README.md for details.

SenseDeep
  • The unreleased aws cli v2 now does include a live `tail -f` option, although you would specifically have to install v2. See my answer https://stackoverflow.com/a/56959236/149416 – Anton I. Sipos Jul 09 '19 at 19:15

Here's a bash script that you can use. The script requires the AWS CLI and jq.

#!/bin/bash

# Bail out if anything fails, or if we do not have the required variables set
set -o errexit -o nounset

LOG_GROUP_NAME=$1
LOG_BEGIN=$(date --date "${2-now}" +%s)
LOG_END=$(date --date "${3-2 minutes}" +%s)
LOG_INTERVAL=5
LOG_EVENTIDS='[]'

while (( $(date +%s) < $LOG_END + $LOG_INTERVAL )); do
  sleep $LOG_INTERVAL
  LOG_EVENTS=$(aws logs filter-log-events --log-group-name $LOG_GROUP_NAME --start-time "${LOG_BEGIN}000" --end-time "${LOG_END}000" --output json)
  echo "$LOG_EVENTS" | jq -rM --argjson eventIds "$LOG_EVENTIDS" '.events[] as $event | select($eventIds | contains([$event.eventId]) | not) | $event | "\(.timestamp / 1000 | todateiso8601) \(.message)"'
  LOG_EVENTIDS=$(echo "$LOG_EVENTS" | jq -crM --argjson eventIds "$LOG_EVENTIDS" '$eventIds + [.events[].eventId] | unique')
done

Usage: save the file, chmod +x it, and then run it: ./cloudwatch-logs-tail.sh log-group-name. The script also takes parameters for the begin and end times, which default to now and 2 minutes from now, respectively. You can specify any strings that can be parsed by date --date for these parameters.

How it works: the script keeps a list of event IDs that have been displayed, which is empty to begin with. It queries CloudWatch Logs to get all log entries in the specified time interval and displays those which do not match our list of event IDs. Then it saves all of the event IDs for the next iteration.
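
The deduplication step can be seen in isolation with a fabricated response (the event IDs and messages below are made up for illustration):

```shell
# Fabricated filter-log-events response with two events
log_events='{"events":[{"eventId":"a1","timestamp":1000,"message":"first"},{"eventId":"b2","timestamp":2000,"message":"second"}]}'

# Pretend we already displayed event "a1" in an earlier iteration
seen_ids='["a1"]'

# Same jq idea as the script: print only events whose ID we have not yet seen
echo "$log_events" | jq -rM --argjson eventIds "$seen_ids" \
  '.events[] as $event | select($eventIds | contains([$event.eventId]) | not) | $event.message'
# prints: second
```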

The script polls every few seconds (set by LOG_INTERVAL in the script), and keeps polling for one more interval past the end time to account for the delay between log ingestion and availability.

Note that this script is not going to be great if you want to keep tailing the logs for more than a few minutes at a time, because the query results that it gets from AWS will keep getting bigger with every added log item. It's fine for quick runs though.

Nikhil Dabas

This is not currently a feature of the CLI since it just exposes the HTTP API for CloudWatch Logs. You could fairly trivially emulate the functionality with a shell script:

#! /bin/sh

end_time=$(($(date +"%s") * 1000))
aws logs get-log-events --log-group-name groupName --log-stream-name streamName --end-time $end_time

while :
do
    start_time=$end_time
    end_time=$(($(date +"%s") * 1000))
    aws logs get-log-events --log-group-name groupName --log-stream-name streamName --start-time $start_time --end-time $end_time
    sleep 1
done

Disclaimer: this won't work on Windows, and there may be a better way to get the time in milliseconds.
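
On the milliseconds point: with GNU date (Linux, or coreutils from Homebrew) you can get milliseconds directly via the %N nanoseconds format. This assumes a GNU date implementation; the stock BSD/macOS date lacks %N:

```shell
# GNU date only: %s%3N = seconds since the epoch plus the first 3 digits
# of the nanosecond field, i.e. milliseconds
end_time=$(date +%s%3N)
echo "$end_time"

# Portable fallback (whole-second precision), as used in the script above
end_time=$(($(date +%s) * 1000))
echo "$end_time"
```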

Jordon Phillips
  • 11,056
  • 3
  • 31
  • 39
  • Thank you for your answer. It helped, but this didn't work for me since the server time and my local machine time are different. I tried changing my local time but still it won't sync properly. – LynAs Dec 02 '15 at 06:26
  • CloudWatch logs are stored with the timezone, and the CloudWatch API uses UTC for timestamps (UNIX epoch in milliseconds), so this will only get events in the past if your system uses a timezone east of GMT (and nothing if you're west of GMT). Also, CloudWatch logs are almost always delayed by a couple of seconds, so the likelihood this will return events even if you correct the time to UTC is pretty low (in my experience). – Travis Warlick Jun 27 '16 at 12:32