TL;DR – it stops very abruptly, so make sure you monitor the Lambda Errors CloudWatch metric.

I’d heard a rumour that when a Lambda ran out of RAM, it didn’t log a message or emit a metric. That didn’t sound right, so I thought I’d try it out and find out.

I wrote a Lambda with a RAM limit of 128MB that purposefully exceeds it, and deployed it: https://github.com/a-h/oom
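
The actual code is in that repo; a minimal sketch of the idea (the handler shape and chunk size here are my own, not necessarily what the repo does) is a handler that keeps allocating until the 128MB limit is exceeded:

```go
package main

import (
	"context"

	"github.com/aws/aws-lambda-go/lambda"
)

// handler appends 10MB chunks to a slice forever, so the process
// exceeds the 128MB memory limit and is killed by the runtime.
func handler(ctx context.Context) (string, error) {
	var hog [][]byte
	for {
		hog = append(hog, make([]byte, 10*1024*1024))
	}
}

func main() {
	lambda.Start(handler)
}
```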

Then, I executed it via the AWS CLI.
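
That’s just a standard aws lambda invoke call, along these lines (the function name is a placeholder for whatever the Serverless Framework deployed it as):

```sh
aws lambda invoke \
  --function-name oom \
  --log-type Tail \
  output.json
```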

Next, I checked the Lambda dashboard and could clearly see the failed invocation, which showed that a CloudWatch metric was available after all.

Lambda Dashboard Display (screenshot)

CloudWatch Metrics (screenshot)
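
The same figures are available outside the console; for example, the Errors metric can be pulled with get-metric-statistics (the function name and time range below are placeholders):

```sh
aws cloudwatch get-metric-statistics \
  --namespace AWS/Lambda \
  --metric-name Errors \
  --dimensions Name=FunctionName,Value=oom \
  --start-time 2018-03-13T20:00:00Z \
  --end-time 2018-03-13T21:00:00Z \
  --period 60 \
  --statistics Sum
```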

## Execution Logs

The first time I ran the Lambda, I’d forgotten to increase the maximum execution time from the Serverless Framework’s default of 6 seconds. Even though it had used up a little more than the 128MB of RAM I’d allocated, the failure was clearly logged as a timeout.
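
Both settings live in serverless.yml; something like this (the values are illustrative rather than copied from the repo):

```yaml
functions:
  oom:
    handler: bin/oom
    memorySize: 128 # MB, the limit being deliberately exceeded
    timeout: 30     # seconds, raised from the default of 6
```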

Once I increased that number, the log entries made the difference clear: `Process exited before completing request` is the entry associated with running out of RAM, while `Task timed out after 6.00 seconds` is written after a timeout.

If the difference between the failure reasons is important, it would be easy enough to write a CloudWatch log extractor to distinguish out-of-memory failures from timeouts.
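
A quick way to check without writing any code is to filter the function’s log group for the out-of-memory message, e.g. (the log group name is a placeholder for whatever the Serverless Framework created):

```sh
# Quoting the pattern matches the exact phrase rather than the individual words.
aws logs filter-log-events \
  --log-group-name /aws/lambda/oom \
  --filter-pattern '"Process exited before completing request"'
```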

The `2018/03/13 20:31:51 unexpected EOF` line wasn’t written by my Lambda; it looks like it’s part of the Go Lambda runtime, since it uses the default console log format of Go’s log package.
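
For comparison, the log package’s default logger prefixes output with the date and time in exactly that shape:

```go
package main

import "log"

func main() {
	// Prints something like "2018/03/13 20:31:51 unexpected EOF",
	// since the default flags add the date and time prefix.
	log.Println("unexpected EOF")
}
```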