
Don't generate output without metrics #51

Open
Dunedan opened this issue Aug 27, 2020 · 10 comments · May be fixed by #66
Labels
enhancement New feature or request

Comments

@Dunedan
Contributor

Dunedan commented Aug 27, 2020

When initializing a metrics context, but not emitting metrics, aws-embedded-metrics still generates output.

Here is a minimal example:

#!/usr/bin/env python3

import os
# Pretend to run inside Lambda so the library picks the Lambda environment
# and writes its records to stdout
os.environ["AWS_LAMBDA_FUNCTION_NAME"] = "dummy-function-name"

from aws_embedded_metrics import metric_scope

@metric_scope
def foo(metrics):
    pass  # no metric is ever put

foo()

I'd expect no output from this script; however, this is what I get (pretty-printed for better readability):

{
    "LogGroup": "dummy-function-name",
    "ServiceName": "dummy-function-name",
    "ServiceType": "AWS::Lambda::Function",
    "executionEnvironment": "",
    "memorySize": "",
    "functionVersion": "",
    "logStreamId": "",
    "_aws": {
        "Timestamp": 1598537498307,
        "CloudWatchMetrics": [
            {
                "Dimensions": [
                    [
                        "LogGroup",
                        "ServiceName",
                        "ServiceType"
                    ]
                ],
                "Metrics": [],
                "Namespace": "aws-embedded-metrics"
            }
        ]
    }
}

I believe this behavior is unintended, and I don't see much benefit in such output without actual metrics.
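
For context, the ask amounts to a guard before serialization: if nothing was ever put on the context, don't emit a record at all. The sketch below is illustrative only; the attribute name (context.metrics) and the sink interface are assumptions, not the actual aws-embedded-metrics internals.

# Illustrative sketch only: "context" and "sink" stand in for whatever the
# library uses internally; context.metrics is assumed to hold put_metric() data.
def flush_if_needed(context, sink):
    if not context.metrics:  # nothing was recorded via put_metric()
        return               # emit no record at all
    sink.accept(context)     # otherwise serialize and write as before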

@Dunedan
Contributor Author

Dunedan commented Aug 27, 2020

There is apparently a similar issue for the NodeJS library: awslabs/aws-embedded-metrics-node#50

@jaredcnance added the enhancement (New feature or request) label Aug 27, 2020
@02strich
Contributor

With 1.0.6 this should be changed for the stdout sink, i.e. it should no longer log "empty" lines.

@Jeff17Robbins

Jeff17Robbins commented Jan 27, 2021

I seem to be getting spurious records with 1.0.6. For example:

{
    "LogGroup": "engine3",
    "ServiceName": "engine3",
    "ServiceType": "AWS::Lambda::Function",
    "executionEnvironment": "AWS_Lambda_python3.8",
    "memorySize": "2048",
    "functionVersion": "1471",
    "logStreamId": "2021/01/27/[1471]71c7b6cf21554f0cb0f69284de60afb7",
    "_aws": {
        "Timestamp": 1611779104214,
        "CloudWatchMetrics": [
            {
                "Dimensions": [
                    [
                        "LogGroup",
                        "ServiceName",
                        "ServiceType"
                    ]
                ],
                "Metrics": [],
                "Namespace": "aws-embedded-metrics"
            }
        ]
    }
}

@jaredcnance
Member

@Jeff17Robbins can you provide more context?

@Jeff17Robbins

Absolutely -- I'm in the midst of another test to try to narrow down the circumstances under which I am getting these "null" records.

@Jeff17Robbins

Jeff17Robbins commented Jan 27, 2021

I found the code that was generating the spurious "null" records:

The commented-out lines below prevent the spurious records. During the run of my test that generated those records, at no point did my code enter the region guarded by if TAG_HELLO_SLACK in dataDict, so I am assuming that simply decorating the class method Write with @metric_scope was sufficient to cause the issue. I will try to produce a standalone reproduction later tonight or tomorrow.

# @metric_scope
def Write(self, dataDict):  # , metrics):
    self._log("(Write) Entering Function", LOG_ALL_EVENTS)

    # if TAG_HELLO_SLACK in dataDict:
    #     val = dataDict["helloSlack"].Value
    #     self.logger.debug(f"{val=} {type(val)=}")
    #     metrics.set_namespace("PeriopManager")
    #     metrics.put_dimensions({"OperationalStats": "engine"})
    #     metrics.put_metric("helloSlack", val, "Seconds")

@Jeff17Robbins

Jeff17Robbins commented Jan 28, 2021

@jaredcnance I reproduced the problem:

This lambda code (Python 3.8) with a layer containing aws-embedded-metrics==1.0.6

import json
from aws_embedded_metrics import metric_scope

class Test:
    @metric_scope
    def Write(self, metrics):
        print("hi from Write")

def lambda_handler(event, context):
    t = Test()
    t.Write()
    
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

produces this log when an event triggers it:

START RequestId: 23d4fdd1-8ff1-42bb-8109-2b0cdf39e76b Version: $LATEST
--
hi from Write
{     "LogGroup": "metric-test",     "ServiceName": "metric-test",     "ServiceType": "AWS::Lambda::Function",     "executionEnvironment": "AWS_Lambda_python3.8",     "memorySize": "128",     "functionVersion": "$LATEST",     "logStreamId": "2021/01/28/[$LATEST]e84d4b6ba5544067b5f0977bcd31338f",     "_aws": {         "Timestamp": 1611792800292,         "CloudWatchMetrics": [             {                 "Dimensions": [                     [                         "LogGroup",                         "ServiceName",                         "ServiceType"                     ]                 ],                 "Metrics": [],                 "Namespace": "aws-embedded-metrics"             }         ]     } }
END RequestId: 23d4fdd1-8ff1-42bb-8109-2b0cdf39e76b
REPORT RequestId: 23d4fdd1-8ff1-42bb-8109-2b0cdf39e76b	Duration: 6.43 ms	Billed Duration: 7 ms	Memory Size: 128 MB	Max Memory Used: 70 MB	Init Duration: 502.97 ms

Simply invoking the decorated Write function shouldn't emit an empty metrics record, or so I would have thought.
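
For what it's worth, the same behaviour should be reproducible locally, without deploying a Lambda, by combining the AWS_LAMBDA_FUNCTION_NAME trick from the issue description with the decorated class method above (output assumed to match what 1.0.6 prints in Lambda, not verified):

#!/usr/bin/env python3
import os
# Pretend to run inside Lambda so the library writes its records to stdout
os.environ["AWS_LAMBDA_FUNCTION_NAME"] = "dummy-function-name"

from aws_embedded_metrics import metric_scope

class Test:
    @metric_scope
    def write(self, metrics):
        print("hi from write")  # no put_metric() call, yet a record is flushed

Test().write()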

@jaredcnance
Member

jaredcnance commented Jan 28, 2021

I want to get a fix for this in before #65. I will try to spend some time on this later today. @Jeff17Robbins, this was not an issue before 1.0.6?

@Jeff17Robbins

@jaredcnance I am a new user of this package as of 1.0.6, so unfortunately I don't know.

It is easy to work around this problem by only decorating a log helper function, which you only call if you have some metric(s) to log. I stumbled on the problem by decorating a function that gets called whether or not there's something to log.

I'm assuming some cleanup in the decorator doesn't realize that there's nothing to "flush"?
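
A minimal sketch of that workaround, based on the commented-out code earlier in the thread (the helper name and the shape of data are illustrative, not from the real code base):

from aws_embedded_metrics import metric_scope

# Keep @metric_scope off the function that always runs; decorate a small helper
# instead, so a metrics scope (and its flush) only exists when there is
# actually something to record.
@metric_scope
def record_hello_slack(value, metrics):
    metrics.set_namespace("PeriopManager")
    metrics.put_dimensions({"OperationalStats": "engine"})
    metrics.put_metric("helloSlack", value, "Seconds")

def write(data):
    if "helloSlack" in data:  # only enter a metrics scope when there is a metric
        record_hello_slack(data["helloSlack"])

Calling write({"helloSlack": 1.2}) emits one record; write({}) emits nothing.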

@jaredcnance
Member

@Dunedan @Jeff17Robbins please take a look at #66 and let me know if this fits your needs.
