
Comparing changes

base repository: newrelic/newrelic-python-agent
base: v10.4.0
head repository: newrelic/newrelic-python-agent
compare: v10.5.0

Commits on Dec 13, 2024

  1. Disable Mergify Queue (#1272)

    * ci(mergify): upgrade configuration to current format
    
    * [Mega-Linter] Apply linters fixes
    
    * Remove merge queue from mergify bot
    
    * Restore deleted comment
    
    ---------
    
    Co-authored-by: Mergify <37929162+mergify[bot]@users.noreply.github.com>
    Co-authored-by: mergify[bot] <mergify[bot]@users.noreply.github.com>
    3 people authored Dec 13, 2024

    SHA: 57ba3fb (verified)

Commits on Dec 16, 2024

  1. Chore - Update Linter Configs (#1273)

    * Add more linter configuration
    
    * Disable flynt fstrings in setup.py
    
    * Run flynt and other linters
    
    * Split pylint config over multiple lines
    
    * Add setup.py disable for flynt
    TimPansino authored Dec 16, 2024

    SHA: 25e1f1e (verified)

Commits on Dec 24, 2024

  1. Add Tablestore vectorstore to list

    lrafeei committed Dec 24, 2024
    SHA: 7dc8c24

Commits on Dec 28, 2024

  1. Merge pull request #1278 from newrelic/add-tablestore-vectorstore

    Add Tablestore VectorStore
    hmstepanek authored Dec 28, 2024

    SHA: 4c06d26 (verified)

Commits on Jan 6, 2025

  1. Add auto-layer-releases flow to deploy flow

    Move auto-layer-releases flow from lambda to GHA.
    hmstepanek committed Jan 6, 2025

    SHA: 830f54c (unverified)
  2. SHA: 3e8794e (unverified)
  3. SHA: 4142cf3 (unverified)
  4. Convert to gh cli

    hmstepanek committed Jan 6, 2025

    SHA: f16b700 (unverified)
  5. Fixup: lint errors

    hmstepanek committed Jan 6, 2025

    SHA: 6833add (unverified)
  6. Update to latest cli version

    hmstepanek committed Jan 6, 2025

    SHA: 9ac223d (unverified)

Commits on Jan 7, 2025

  1. Use pre-installed gh

    hmstepanek committed Jan 7, 2025

    SHA: 0988d57 (unverified)

Commits on Jan 8, 2025

  1. Check for ExceptionMiddleware removal in starlette.exceptions (#1282)

    * Add check for import proxy removal
    
    * Fix order of logic
    lrafeei authored Jan 8, 2025

    SHA: 34e7024 (verified)

Commits on Jan 9, 2025

  1. Update .github/workflows/deploy-python.yml

    Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com>
    hmstepanek and TimPansino authored Jan 9, 2025

    SHA: 3de90db (verified)

Commits on Jan 10, 2025

  1. Merge pull request #1274 from newrelic/add-layers-release-flow-to-gha

    Add auto-layer-releases flow to deploy flow
    hmstepanek authored Jan 10, 2025

    SHA: e4ff798 (verified)

Commits on Jan 17, 2025

  1. Merge branch 'main' into develop-tests

    TimPansino authored Jan 17, 2025

    SHA: cda28b6 (verified)
  2. Pin sanic versions for older python versions

    TimPansino committed Jan 17, 2025
    SHA: 3543569
  3. Merge pull request #1286 from newrelic/fix-sanic-testing

    Fix Sanic Testing
    umaannamalai authored Jan 17, 2025

    SHA: d13cf32 (verified)

Commits on Jan 18, 2025

  1. Fix Langchain Tests (#1287)

    * Fix issue with document ID changing content of request
    
    * Add temperature explicitly as an argument to openai client
    
    ---------
    
    Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
    TimPansino and mergify[bot] authored Jan 18, 2025

    SHA: 415dc73 (verified)
  2. Merge pull request #1285 from newrelic/develop-tests

    Merge develop-tests into main
    TimPansino authored Jan 18, 2025

    SHA: 9dd76a8 (verified)

Commits on Jan 24, 2025

  1. Add metadata.page_label to test (#1290)

    lrafeei authored Jan 24, 2025

    SHA: 454709e (verified)

Commits on Jan 28, 2025

  1. Add quotes to release tags workflow

    hmstepanek committed Jan 28, 2025

    SHA: 9bae55a (unverified)
  2. Merge pull request #1292 from newrelic/add-quotes

    Add quotes to release tags workflow
    hmstepanek authored Jan 28, 2025

    SHA: 58363f2 (verified)

Commits on Jan 29, 2025

  1. Structlog processor formatter (#1289)

    * Initial commit
    
    * Add local_log_decoration tests
    
    * Add tests for processor formatter
    
    * Add try-except clauses
    
    ---------
    
    Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
    lrafeei and mergify[bot] authored Jan 29, 2025

    SHA: 84a5d46 (verified)
  2. fix(NewRelicContextFormatter): add cause and context to stack traces (#…

    …1266)
    
    Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
    Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com>
    3 people authored Jan 29, 2025

    SHA: 019b019 (verified)
  3. cassandra-driver Instrumentation (#1280)

    * Add cassandradriver to tox
    
    * Add initial cassandra instrumentation
    
    * Add cassandra testing
    
    * Refined instrumentation and tests
    
    * Cover multiple async reactors in tests
    
    * Separate async and sync tests
    
    * Provide cluster options as fixture
    
    * Add ORM Model tests for cqlengine
    
    * Add cassandra runner to Github Actions
    
    * Adjust cassandra tox env list
    
    * Add skip for pypy libev tests
    
    * Format trivy
    
    * Address suggestions from code review
    
    ---------
    
    Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
    TimPansino and mergify[bot] authored Jan 29, 2025

    SHA: 82918c5 (verified)
  4. Add Vectorstore instrumentation automation (#1279)

    * Tweak instrumentation & add automation script
    
    * Remove commented out code
    
    * Move script and add description
    
    * Remove breakpoint
    
    ---------
    
    Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
    Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com>
    3 people authored Jan 29, 2025

    SHA: fb171c0 (verified)
  5. Add Agent Control Health Checks (#1294)

    * Initial commit.
    
    * Add status change logic.
    
    * Checkpoint.
    
    * Remove unused arg.
    
    * Clear configuration leaking from test_configuration.py.
    
    * Remove log statement when fleet ID is not found.
    
    * Add support testing and support for new status codes.
    
    * Update thread names to use NR-Control.
    
    * Refactoring.
    
    * Address review feedback.
    
    * Capture urlparse logic in try except.
    
    * Add additional shutdown check in shutdown_agent.
    
    * Refactor status code and message usage in superagent
    
    * Update regex assertion
    
    Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com>
    
    * Add super agent supportability metric.
    
    * Add newline.
    
    * Rename super agent to agent control.
    
    * Fix supportability metric test.
    
    * Update agent control config.
    
    * Add explicit check for empty delivery location.
    
    * Change delivery location to property on health check class.
    
    * Use environ_as_bool/ int.
    
    * Add max 3 app names unhealthy status code.
    
    * Switch order of tests to avoid config collisions.
    
    * Reset app name.
    
    * Reuse app name list variable.
    
    * Fix line spacing.
    
    * [Mega-Linter] Apply linters fixes
    
    * Add license header to HTTP client recorder.
    
    ---------
    
    Co-authored-by: Tim Pansino <timpansino@gmail.com>
    Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com>
    Co-authored-by: umaannamalai <umaannamalai@users.noreply.github.com>
    Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
    5 people authored Jan 29, 2025

    SHA: 1e87217 (verified)
  6. Update deploy action versions (#1295)

    TimPansino authored Jan 29, 2025

    SHA: 2431215 (verified)
  7. Disable fail-fast on deploy jobs

    TimPansino committed Jan 29, 2025
    SHA: b95ea5e
  8. Pin ubuntu 22 instead of ubuntu-latest

    TimPansino committed Jan 29, 2025
    SHA: 35e23fa

Commits on Jan 30, 2025

  1. Merge pull request #1296 from newrelic/update-cicd-actions

    Rollback to Ubuntu-22 Runners
    hmstepanek authored Jan 30, 2025

    SHA: 237897a (verified)
Showing with 1,897 additions and 184 deletions.
  1. +0 −28 .github/mergify.yml
  2. +1 −1 .github/workflows/build-ci-image.yml
  3. +22 −7 .github/workflows/deploy-python.yml
  4. +1 −1 .github/workflows/mega-linter.yml
  5. +113 −45 .github/workflows/tests.yml
  6. +1 −0 .gitignore
  7. +1 −3 newrelic/api/llm_custom_attributes.py
  8. +2 −2 newrelic/api/log.py
  9. +74 −25 newrelic/config.py
  10. +3 −0 newrelic/console.py
  11. +9 −0 newrelic/core/agent.py
  12. +239 −0 newrelic/core/agent_control_health.py
  13. +27 −0 newrelic/core/agent_protocol.py
  14. +25 −3 newrelic/core/application.py
  15. +119 −0 newrelic/hooks/datastore_cassandradriver.py
  16. +3 −1 newrelic/hooks/framework_starlette.py
  17. +16 −0 newrelic/hooks/logger_structlog.py
  18. +4 −5 newrelic/hooks/mlmodel_langchain.py
  19. +45 −1 pyproject.toml
  20. +8 −3 setup.py
  21. +256 −0 tests/agent_features/test_agent_control_health_check.py
  22. +13 −9 tests/agent_features/test_logs_in_context.py
  23. +23 −0 tests/agent_unittests/test_agent_connect.py
  24. +5 −35 tests/agent_unittests/test_agent_protocol.py
  25. +9 −4 tests/cross_agent/test_ecs_data.py
  26. +91 −0 tests/datastore_cassandradriver/conftest.py
  27. +172 −0 tests/datastore_cassandradriver/test_cassandra.py
  28. +144 −0 tests/datastore_cassandradriver/test_cqlengine.py
  29. +1 −1 tests/datastore_motor/test_collection.py
  30. +1 −1 tests/datastore_pymongo/test_async_collection.py
  31. +1 −1 tests/datastore_pymongo/test_collection.py
  32. +273 −0 tests/logger_structlog/test_processor_formatter.py
  33. +1 −0 tests/mlmodel_langchain/conftest.py
  34. +100 −0 tests/mlmodel_langchain/new_vectorstore_adder.py
  35. +3 −3 tests/mlmodel_langchain/test_chain.py
  36. +2 −1 tests/mlmodel_langchain/test_vectorstore.py
  37. +27 −3 tests/testing_support/db_settings.py
  38. +55 −0 tests/testing_support/http_client_recorder.py
  39. +7 −1 tox.ini
28 changes: 0 additions & 28 deletions .github/mergify.yml
@@ -1,37 +1,9 @@
# For condition grammar see: https://docs.mergify.com/conditions/#grammar

shared:
conditions:
- and: &pr_ready_checks
- "#approved-reviews-by>=1" # A '#' pulls the length of the underlying list
- "label=ready-to-merge"
- "check-success=tests"
- "-draft" # Don't include draft PRs
- "-merged"
- or: # Only handle branches that target main or develop branches
- "base=main"
- "base~=^develop"

queue_rules:
- name: default
conditions:
- and: *pr_ready_checks
merge_method: squash

pull_request_rules:
# Merge Queue PR Rules
- name: Regular PRs - Add to merge queue on approval (squash)
conditions:
- and: *pr_ready_checks
- "-head~=^develop" # Don't include PRs from develop branches
actions:
queue:
method: squash

# Automatic PR Updates
- name: Automatic PR branch updates
conditions:
- "queue-position=-1" # Not queued
- "-draft" # Don't include draft PRs
- "-merged"
actions:
2 changes: 1 addition & 1 deletion .github/workflows/build-ci-image.yml
@@ -23,7 +23,7 @@ concurrency:

jobs:
build:
- runs-on: ubuntu-latest
+ runs-on: ubuntu-22.04

steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # 4.1.1
29 changes: 22 additions & 7 deletions .github/workflows/deploy-python.yml
@@ -21,8 +21,9 @@ on:

jobs:
build-linux-py3-legacy:
- runs-on: ubuntu-latest
+ runs-on: ubuntu-22.04
strategy:
+ fail-fast: false
matrix:
wheel:
- cp37-manylinux
@@ -35,7 +36,7 @@ jobs:
fetch-depth: 0

- name: Setup QEMU
- uses: docker/setup-qemu-action@68827325e0b33c7199eb31dd4e31fbe9023e06e3 # 3.0.0
+ uses: docker/setup-qemu-action@53851d14592bedcffcf25ea515637cff71ef929a # 3.3.0

- name: Build Wheels
uses: pypa/cibuildwheel@8d945475ac4b1aac4ae08b2fd27db9917158b6ce # 2.17.0
@@ -55,8 +56,9 @@ jobs:
retention-days: 1

build-linux-py3:
- runs-on: ubuntu-latest
+ runs-on: ubuntu-22.04
strategy:
+ fail-fast: false
matrix:
wheel:
- cp38-manylinux
@@ -79,10 +81,10 @@ jobs:
fetch-depth: 0

- name: Setup QEMU
- uses: docker/setup-qemu-action@68827325e0b33c7199eb31dd4e31fbe9023e06e3 # 3.0.0
+ uses: docker/setup-qemu-action@53851d14592bedcffcf25ea515637cff71ef929a # 3.3.0

- name: Build Wheels
- uses: pypa/cibuildwheel@d4a2945fcc8d13f20a1b99d461b8e844d5fc6e23 # 2.21.1
+ uses: pypa/cibuildwheel@ee63bf16da6cddfb925f542f2c7b59ad50e93969 # 2.22.0
env:
CIBW_PLATFORM: linux
CIBW_BUILD: "${{ matrix.wheel }}*"
@@ -99,7 +101,7 @@ jobs:
retention-days: 1

build-sdist:
- runs-on: ubuntu-latest
+ runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # 4.1.1
with:
@@ -131,7 +133,7 @@ jobs:
retention-days: 1

deploy:
- runs-on: ubuntu-latest
+ runs-on: ubuntu-22.04

needs:
- build-linux-py3-legacy
@@ -182,3 +184,16 @@ jobs:
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}

- name: Create release tags for Lambda and K8s Init Containers
run: |
RELEASE_TITLE="New Relic Python Agent ${GITHUB_REF}.0"
RELEASE_TAG="${GITHUB_REF}.0_python"
RELEASE_NOTES="Automated release for [Python Agent ${GITHUB_REF}](https://github.com/newrelic/newrelic-python-agent/releases/tag/${GITHUB_REF})"
gh auth login --with-token <<< $GH_RELEASE_TOKEN
echo "newrelic/newrelic-lambda-layers - Releasing \"${RELEASE_TITLE}\" with tag ${RELEASE_TAG}"
gh release create "${RELEASE_TAG}" --title="${RELEASE_TITLE}" --repo=newrelic/newrelic-lambda-layers --notes="${RELEASE_NOTES}"
echo "newrelic/newrelic-agent-init-container - Releasing \"${RELEASE_TITLE}\" with tag ${RELEASE_TAG}"
gh release create "${RELEASE_TAG}" --title="${RELEASE_TITLE}" --repo=newrelic/newrelic-agent-init-container --notes="${RELEASE_NOTES}"
env:
GH_RELEASE_TOKEN: ${{ secrets.GH_RELEASE_TOKEN }}
2 changes: 1 addition & 1 deletion .github/workflows/mega-linter.yml
@@ -21,7 +21,7 @@ concurrency:
jobs:
build:
name: Mega-Linter
- runs-on: ubuntu-latest
+ runs-on: ubuntu-22.04
steps:
# Git Checkout
- name: Checkout Code
158 changes: 113 additions & 45 deletions .github/workflows/tests.yml

Large diffs are not rendered by default.

1 change: 1 addition & 0 deletions .gitignore
@@ -3,6 +3,7 @@

# Linter
megalinter-reports/
.pre-commit-config.yaml

# Byte-compiled / optimized / DLL files
__pycache__/
4 changes: 1 addition & 3 deletions newrelic/api/llm_custom_attributes.py
@@ -23,9 +23,7 @@ class WithLlmCustomAttributes(object):
def __init__(self, custom_attr_dict):
transaction = current_transaction()
if not custom_attr_dict or not isinstance(custom_attr_dict, dict):
- raise TypeError(
-     "custom_attr_dict must be a non-empty dictionary. Received type: %s" % type(custom_attr_dict)
- )
+ raise TypeError(f"custom_attr_dict must be a non-empty dictionary. Received type: {type(custom_attr_dict)}")

# Add "llm." prefix to all keys in attribute dictionary
context_attrs = {k if k.startswith("llm.") else f"llm.{k}": v for k, v in custom_attr_dict.items()}
4 changes: 2 additions & 2 deletions newrelic/api/log.py
@@ -16,7 +16,7 @@
import logging
import re
import warnings
- from traceback import format_tb
+ from traceback import format_exception

from newrelic.api.application import application_instance
from newrelic.api.time_trace import get_linking_metadata
@@ -87,7 +87,7 @@ def format_exc_info(cls, exc_info, stack_trace_limit=0):

if stack_trace_limit is None or stack_trace_limit > 0:
if exc_info[2] is not None:
- stack_trace = "".join(format_tb(exc_info[2], limit=stack_trace_limit)) or None
+ stack_trace = "".join(format_exception(*exc_info, limit=stack_trace_limit)) or None
else:
stack_trace = None
formatted["error.stack_trace"] = stack_trace
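The `format_tb` → `format_exception` swap above is what adds chained exceptions (`__cause__`/`__context__`) to `error.stack_trace` (#1266): `format_tb` renders only the outer traceback frames, while `format_exception` also walks the exception chain. A standalone sketch of the difference (not agent code):

```python
import sys
import traceback

def capture():
    try:
        try:
            raise ValueError("original")
        except ValueError as exc:
            raise RuntimeError("wrapper") from exc
    except RuntimeError:
        exc_info = sys.exc_info()
        # format_tb: only the outer traceback's frames, no cause/context
        tb_only = "".join(traceback.format_tb(exc_info[2]))
        # format_exception: full chain, including the __cause__
        full = "".join(traceback.format_exception(*exc_info))
        return tb_only, full

tb_only, full = capture()
assert "ValueError: original" not in tb_only
assert "ValueError: original" in full
assert "direct cause" in full
```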
99 changes: 74 additions & 25 deletions newrelic/config.py
@@ -17,6 +17,8 @@
import logging
import os
import sys
import threading
import time
import traceback

import newrelic.api.application
@@ -40,12 +42,19 @@
from newrelic.common.log_file import initialize_logging
from newrelic.common.object_names import callable_name, expand_builtin_exception_name
from newrelic.core import trace_cache
from newrelic.core.agent_control_health import (
HealthStatus,
agent_control_health_instance,
agent_control_healthcheck_loop,
)
from newrelic.core.config import (
Settings,
apply_config_setting,
default_host,
fetch_config_setting,
)
from newrelic.core.agent_control_health import HealthStatus, agent_control_health_instance, agent_control_healthcheck_loop


__all__ = ["initialize", "filter_app_factory"]

@@ -100,6 +109,7 @@ def _map_aws_account_id(s):
# all the settings have been read.

_cache_object = []
agent_control_health = agent_control_health_instance()


def _reset_config_parser():
@@ -592,12 +602,16 @@ def _process_app_name_setting():
# primary application name and link it with the other applications.
# When activating the application the linked names will be sent
# along to the core application where the association will be
- created if the do not exist.
+ created if it does not exist.

app_name_list = _settings.app_name.split(";")
name = app_name_list[0].strip() or "Python Application"

name = _settings.app_name.split(";")[0].strip() or "Python Application"
if len(app_name_list) > 3:
agent_control_health.set_health_status(HealthStatus.MAX_APP_NAME.value)

linked = []
for altname in _settings.app_name.split(";")[1:]:
for altname in app_name_list[1:]:
altname = altname.strip()
if altname:
linked.append(altname)
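The hunk above reuses a single `split(";")` result for both the primary and linked app names, and flags the new `MAX_APP_NAME` unhealthy status once more than three names are configured. A standalone sketch of that parsing logic (the function name is hypothetical; the splitting, default, and 3-name limit come from the diff):

```python
def parse_app_name(app_name_setting):
    # First entry is the primary application name; the rest are linked names.
    app_name_list = app_name_setting.split(";")
    name = app_name_list[0].strip() or "Python Application"
    # Agent Control reports an unhealthy status beyond 3 configured names.
    exceeds_max = len(app_name_list) > 3
    linked = [alt.strip() for alt in app_name_list[1:] if alt.strip()]
    return name, linked, exceeds_max

assert parse_app_name("Service A;Service B") == ("Service A", ["Service B"], False)
assert parse_app_name("") == ("Python Application", [], False)
assert parse_app_name("a;b;c;d")[2] is True
```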
@@ -1033,21 +1047,25 @@ def _load_configuration(

# Now read in the configuration file. Cache the config file
# name in internal settings object as indication of succeeding.
if config_file.endswith(".toml"):
try:
import tomllib
except ImportError:
raise newrelic.api.exceptions.ConfigurationError(
"TOML configuration file can only be used if tomllib is available (Python 3.11+)."
)
with open(config_file, "rb") as f:
content = tomllib.load(f)
newrelic_section = content.get("tool", {}).get("newrelic")
if not newrelic_section:
raise newrelic.api.exceptions.ConfigurationError("New Relic configuration not found in TOML file.")
_config_object.read_dict(_toml_config_to_configparser_dict(newrelic_section))
elif not _config_object.read([config_file]):
raise newrelic.api.exceptions.ConfigurationError(f"Unable to open configuration file {config_file}.")
try:
if config_file.endswith(".toml"):
try:
import tomllib
except ImportError:
raise newrelic.api.exceptions.ConfigurationError(
"TOML configuration file can only be used if tomllib is available (Python 3.11+)."
)
with open(config_file, "rb") as f:
content = tomllib.load(f)
newrelic_section = content.get("tool", {}).get("newrelic")
if not newrelic_section:
raise newrelic.api.exceptions.ConfigurationError("New Relic configuration not found in TOML file.")
_config_object.read_dict(_toml_config_to_configparser_dict(newrelic_section))
elif not _config_object.read([config_file]):
raise newrelic.api.exceptions.ConfigurationError(f"Unable to open configuration file {config_file}.")
except Exception:
agent_control_health.set_health_status(HealthStatus.INVALID_CONFIG.value)
raise

_settings.config_file = config_file

@@ -2162,13 +2180,10 @@ def _process_module_builtin_defaults():
"newrelic.hooks.mlmodel_langchain",
"instrument_langchain_callbacks_manager",
)

# VectorStores with similarity_search method
_process_module_definition(
"langchain_community.vectorstores.docarray.hnsw",
"newrelic.hooks.mlmodel_langchain",
"instrument_langchain_vectorstore_similarity_search",
)
_process_module_definition(
- "langchain_community.vectorstores.docarray.in_memory",
+ "langchain_community.vectorstores.docarray",
"newrelic.hooks.mlmodel_langchain",
"instrument_langchain_vectorstore_similarity_search",
)
@@ -2178,7 +2193,7 @@ def _process_module_builtin_defaults():
"instrument_langchain_vectorstore_similarity_search",
)
_process_module_definition(
- "langchain_community.vectorstores.redis.base",
+ "langchain_community.vectorstores.redis",
"newrelic.hooks.mlmodel_langchain",
"instrument_langchain_vectorstore_similarity_search",
)
@@ -2719,6 +2734,12 @@ def _process_module_builtin_defaults():
"instrument_langchain_vectorstore_similarity_search",
)

_process_module_definition(
"langchain_community.vectorstores.tablestore",
"newrelic.hooks.mlmodel_langchain",
"instrument_langchain_vectorstore_similarity_search",
)

_process_module_definition(
"langchain_core.tools",
"newrelic.hooks.mlmodel_langchain",
@@ -3158,6 +3179,11 @@ def _process_module_builtin_defaults():
"instrument_gunicorn_app_base",
)

_process_module_definition("cassandra", "newrelic.hooks.datastore_cassandradriver", "instrument_cassandra")
_process_module_definition(
"cassandra.cluster", "newrelic.hooks.datastore_cassandradriver", "instrument_cassandra_cluster"
)

_process_module_definition("cx_Oracle", "newrelic.hooks.database_cx_oracle", "instrument_cx_oracle")

_process_module_definition("ibm_db_dbi", "newrelic.hooks.database_ibm_db_dbi", "instrument_ibm_db_dbi")
@@ -4818,13 +4844,29 @@ def _setup_agent_console():
newrelic.core.agent.Agent.run_on_startup(_startup_agent_console)


agent_control_health_thread = threading.Thread(
name="Agent-Control-Health-Main-Thread", target=agent_control_healthcheck_loop
)
agent_control_health_thread.daemon = True


def _setup_agent_control_health():
if agent_control_health_thread.is_alive():
return

if agent_control_health.health_check_enabled:
agent_control_health_thread.start()


def initialize(
config_file=None,
environment=None,
ignore_errors=None,
log_file=None,
log_level=None,
):
agent_control_health.start_time_unix_nano = time.time_ns()

if config_file is None:
config_file = os.environ.get("NEW_RELIC_CONFIG_FILE", None)

@@ -4836,6 +4878,12 @@ def initialize(

_load_configuration(config_file, environment, ignore_errors, log_file, log_level)

_setup_agent_control_health()

if _settings.monitor_mode:
if not _settings.license_key:
agent_control_health.set_health_status(HealthStatus.MISSING_LICENSE.value)

if _settings.monitor_mode or _settings.developer_mode:
_settings.enabled = True
_setup_instrumentation()
@@ -4844,6 +4892,7 @@ def initialize(
_setup_agent_console()
else:
_settings.enabled = False
agent_control_health.set_health_status(HealthStatus.AGENT_DISABLED.value)


def filter_app_factory(app, global_conf, config_file, environment=None):
3 changes: 3 additions & 0 deletions newrelic/console.py
@@ -36,6 +36,7 @@
from newrelic.common.object_wrapper import ObjectProxy
from newrelic.core.agent import agent_instance
from newrelic.core.config import flatten_settings, global_settings
from newrelic.core.agent_control_health import HealthStatus, agent_control_health_instance
from newrelic.core.trace_cache import trace_cache


@@ -512,6 +513,8 @@ def __init__(self, config_file, stdin=None, stdout=None, log=None):
self.__log_object = log

if not self.__config_object.read([config_file]):
agent_control_instance = agent_control_health_instance()
agent_control_instance.set_health_status(HealthStatus.INVALID_CONFIG.value)
raise RuntimeError(f"Unable to open configuration file {config_file}.")

listener_socket = self.__config_object.get("newrelic", "console.listener_socket") % {"pid": "*"}
9 changes: 9 additions & 0 deletions newrelic/core/agent.py
@@ -35,6 +35,7 @@
from newrelic.samplers.cpu_usage import cpu_usage_data_source
from newrelic.samplers.gc_data import garbage_collector_data_source
from newrelic.samplers.memory_usage import memory_usage_data_source
from newrelic.core.agent_control_health import HealthStatus, agent_control_health_instance

_logger = logging.getLogger(__name__)

@@ -217,6 +218,9 @@ def __init__(self, config):
self._scheduler = sched.scheduler(self._harvest_timer, self._harvest_shutdown.wait)

self._process_shutdown = False
self._agent_control = agent_control_health_instance()

self._lock = threading.Lock()

@@ -734,6 +738,11 @@ def shutdown_agent(self, timeout=None):
if self._harvest_shutdown_is_set():
return

self._agent_control.set_health_status(HealthStatus.AGENT_SHUTDOWN.value)

if self._agent_control.health_check_enabled:
self._agent_control.write_to_health_file()

if timeout is None:
timeout = self._config.shutdown_timeout

239 changes: 239 additions & 0 deletions newrelic/core/agent_control_health.py
@@ -0,0 +1,239 @@
# Copyright 2010 New Relic, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import os
import sched
import threading
import time
import uuid
from enum import IntEnum
from pathlib import Path
from urllib.parse import urlparse

from newrelic.core.config import _environ_as_bool, _environ_as_int

_logger = logging.getLogger(__name__)


class HealthStatus(IntEnum):
HEALTHY = 0
INVALID_LICENSE = 1
MISSING_LICENSE = 2
FORCED_DISCONNECT = 3
HTTP_ERROR = 4
MAX_APP_NAME = 6
PROXY_ERROR = 7
AGENT_DISABLED = 8
FAILED_NR_CONNECTION = 9
INVALID_CONFIG = 10
AGENT_SHUTDOWN = 99


# Set enum integer values as dict keys to reduce performance impact of string copies
HEALTH_CHECK_STATUSES = {
HealthStatus.HEALTHY.value: "Healthy",
HealthStatus.INVALID_LICENSE.value: "Invalid license key (HTTP status code 401)",
HealthStatus.MISSING_LICENSE.value: "License key missing in configuration",
HealthStatus.FORCED_DISCONNECT.value: "Forced disconnect received from New Relic (HTTP status code 410)",
HealthStatus.HTTP_ERROR.value: "HTTP error response code {response_code} received from New Relic while sending data type {info}",
HealthStatus.MAX_APP_NAME.value: "The maximum number of configured app names (3) exceeded",
HealthStatus.PROXY_ERROR.value: "HTTP Proxy configuration error; response code {response_code}",
HealthStatus.AGENT_DISABLED.value: "Agent is disabled via configuration",
HealthStatus.FAILED_NR_CONNECTION.value: "Failed to connect to New Relic data collector",
HealthStatus.INVALID_CONFIG.value: "Agent config file is not able to be parsed",
HealthStatus.AGENT_SHUTDOWN.value: "Agent has shutdown",
}
UNKNOWN_STATUS_MESSAGE = "Unknown health status code."
HEALTHY_STATUS_MESSAGE = HEALTH_CHECK_STATUSES[HealthStatus.HEALTHY.value] # Assign most used status a symbol
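A note on the status table above: only some messages contain `{response_code}`/`{info}` placeholders, yet `set_health_status` formats every message with both keyword arguments. This works because `str.format` silently ignores unused keyword arguments, as this minimal illustration (not part of the diff) shows:

```python
# str.format ignores unused keyword arguments, so one call site can render
# both parameterized and plain status messages.
http_error = "HTTP error response code {response_code} received from New Relic while sending data type {info}"
healthy = "Healthy"

print(http_error.format(response_code=503, info="metric_data"))
# Extra kwargs are discarded rather than raising KeyError/TypeError:
print(healthy.format(response_code=None, info=None))
```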

PROTOCOL_ERROR_CODES = frozenset(
[HealthStatus.FORCED_DISCONNECT.value, HealthStatus.HTTP_ERROR.value, HealthStatus.PROXY_ERROR.value]
)
LICENSE_KEY_ERROR_CODES = frozenset([HealthStatus.INVALID_LICENSE.value, HealthStatus.MISSING_LICENSE.value])

NR_CONNECTION_ERROR_CODES = frozenset([HealthStatus.FAILED_NR_CONNECTION.value, HealthStatus.FORCED_DISCONNECT.value])


def is_valid_file_delivery_location(file_uri):
# Verify whether file directory provided to agent via env var is a valid file URI to determine whether health
# check should run
try:
parsed_uri = urlparse(file_uri)
if not parsed_uri.scheme or not parsed_uri.path:
_logger.warning(
"Configured Agent Control health delivery location is not a complete file URI. Health check will not be "
"enabled. "
)
return False

if parsed_uri.scheme != "file":
_logger.warning(
"Configured Agent Control health delivery location does not have a valid scheme. Health check will not be "
"enabled."
)
return False

path = Path(parsed_uri.path)

# Check if the path exists
if not path.exists():
_logger.warning(
"Configured Agent Control health delivery location does not exist. Health check will not be enabled."
)
return False

return True

except Exception as e:
_logger.warning(
"Configured Agent Control health delivery location is not valid. Health check will not be enabled."
)
return False
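For reference, here is how `urlparse` decomposes the URI shapes the validator above accepts or rejects (the on-disk existence check is separate); this sketch is illustrative and not part of the diff:

```python
from urllib.parse import urlparse

# A complete file URI has both a scheme and a path.
ok = urlparse("file:///newrelic/apm/health")
print(ok.scheme, ok.path)      # file /newrelic/apm/health -> accepted

# Wrong scheme: parses fine but fails the scheme check.
bad_scheme = urlparse("http://example.com/health")
print(bad_scheme.scheme)       # http -> rejected

# Bare path: no scheme at all, fails the completeness check.
no_scheme = urlparse("/newrelic/apm/health")
print(bool(no_scheme.scheme))  # False -> rejected
```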


class AgentControlHealth:
_instance_lock = threading.Lock()
_instance = None

# Define a way to access/create a single agent control object instance similar to the agent_singleton
@staticmethod
def agent_control_health_singleton():
if AgentControlHealth._instance:
return AgentControlHealth._instance

with AgentControlHealth._instance_lock:
if not AgentControlHealth._instance:
instance = AgentControlHealth()

AgentControlHealth._instance = instance

return AgentControlHealth._instance

def __init__(self):
# Initialize health check with a healthy status that can be updated as issues are encountered
self.status_code = HealthStatus.HEALTHY.value
self.status_message = HEALTHY_STATUS_MESSAGE
self.start_time_unix_nano = None
self.pid_file_id_map = {}

@property
def health_check_enabled(self):
# Default to False - this must be explicitly set to True by the sidecar/operator to enable the health check
agent_control_enabled = _environ_as_bool("NEW_RELIC_AGENT_CONTROL_ENABLED", False)
if not agent_control_enabled:
return False

return is_valid_file_delivery_location(self.health_delivery_location)

@property
def health_delivery_location(self):
# Set a default file path if env var is not set or set to an empty string
health_file_location = (
os.environ.get("NEW_RELIC_AGENT_CONTROL_HEALTH_DELIVERY_LOCATION", "") or "file:///newrelic/apm/health"
)

return health_file_location
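The `or` in the property above matters: `os.environ.get(var, default)` alone would return an empty string when the variable is set but empty, while `get(...) or default` treats both unset and empty as "use the default". A quick illustration (not part of the diff):

```python
import os

# Variable present but empty: get()'s own default does NOT apply...
os.environ["NEW_RELIC_AGENT_CONTROL_HEALTH_DELIVERY_LOCATION"] = ""

# ...but the trailing `or` falls through to the hardcoded default.
location = (
    os.environ.get("NEW_RELIC_AGENT_CONTROL_HEALTH_DELIVERY_LOCATION", "") or "file:///newrelic/apm/health"
)
print(location)  # file:///newrelic/apm/health
```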

@property
def is_healthy(self):
return self.status_code == HealthStatus.HEALTHY.value

def set_health_status(self, status_code, response_code=None, info=None):
previous_status_code = self.status_code

if status_code == HealthStatus.FAILED_NR_CONNECTION.value and previous_status_code in LICENSE_KEY_ERROR_CODES:
# Do not update to failed connection status when license key is the issue so the more descriptive status is not overridden
return
elif status_code in NR_CONNECTION_ERROR_CODES and previous_status_code == HealthStatus.MAX_APP_NAME.value:
# Do not let NR connection error override the max app name status
return
elif status_code == HealthStatus.AGENT_SHUTDOWN.value and not self.is_healthy:
# Do not override status with agent_shutdown unless the agent was previously healthy
return

status_message = HEALTH_CHECK_STATUSES.get(status_code, UNKNOWN_STATUS_MESSAGE)
self.status_message = status_message.format(response_code=response_code, info=info)
self.status_code = status_code

def update_to_healthy_status(self, protocol_error=False, collector_error=False):
# If the unhealthy status code was not config related, it may be resolved during an active
# session, so this function updates back to a healthy status based on the error type.
# Since this function is only called when the agent functioned as expected, we check
# whether the previous status was unhealthy so we know to update it
if (
protocol_error
and self.status_code in PROTOCOL_ERROR_CODES
or collector_error
and self.status_code == HealthStatus.FAILED_NR_CONNECTION.value
):
self.status_code = HealthStatus.HEALTHY.value
self.status_message = HEALTHY_STATUS_MESSAGE

def write_to_health_file(self):
status_time_unix_nano = time.time_ns()

try:
file_path = urlparse(self.health_delivery_location).path
file_id = self.get_file_id()
file_name = f"health-{file_id}.yml"
full_path = os.path.join(file_path, file_name)
is_healthy = self.is_healthy

with open(full_path, "w") as f:
f.write(f"healthy: {is_healthy}\n")
f.write(f"status: {self.status_message}\n")
f.write(f"start_time_unix_nano: {self.start_time_unix_nano}\n")
f.write(f"status_time_unix_nano: {status_time_unix_nano}\n")
if not is_healthy:
f.write(f"last_error: NR-APM-{self.status_code:03d}\n")
except Exception:
_logger.warning("Unable to write to agent health file.")
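The `last_error` line written above zero-pads the integer status to three digits, which is what produces the `NR-APM-NNN` identifiers asserted in the health-check tests later in this diff. A standalone illustration:

```python
# f-string 03d formatting pads health status codes to the NR-APM-NNN shape.
for code in (1, 10, 99):
    print(f"NR-APM-{code:03d}")
# NR-APM-001
# NR-APM-010
# NR-APM-099
```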

def get_file_id(self):
pid = os.getpid()

# Each file name includes a UUID with the hyphens stripped
file_id = str(uuid.uuid4()).replace("-", "")

# Map the UUID to the process ID to ensure each agent instance has one UUID associated with it
if pid not in self.pid_file_id_map:
self.pid_file_id_map[pid] = file_id
return file_id

return self.pid_file_id_map[pid]
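A condensed sketch of the per-process caching above (not the class itself): the first call for a PID generates a hyphen-free UUID and later calls return the cached value. Note `uuid.uuid4().hex` is equivalent to `str(uuid.uuid4()).replace("-", "")`:

```python
import os
import uuid

pid_file_id_map = {}

def get_file_id():
    pid = os.getpid()
    if pid not in pid_file_id_map:
        # .hex is the 32-character hyphen-free form of the UUID
        pid_file_id_map[pid] = uuid.uuid4().hex
    return pid_file_id_map[pid]

first = get_file_id()
print(len(first), first == get_file_id())  # 32 True
```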


def agent_control_health_instance():
# Helper function directly returns the singleton instance similar to agent_instance()
return AgentControlHealth.agent_control_health_singleton()
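The singleton helper above uses the double-checked locking pattern: a lock-free fast path once the instance exists, and a locked re-check so only one thread ever constructs it. A minimal model of the same shape (illustrative, not the agent's class):

```python
import threading

class Singleton:
    _instance_lock = threading.Lock()
    _instance = None

    @classmethod
    def instance(cls):
        if cls._instance:              # fast path: no lock once created
            return cls._instance
        with cls._instance_lock:       # slow path: only one thread constructs
            if not cls._instance:
                cls._instance = cls()
        return cls._instance

print(Singleton.instance() is Singleton.instance())  # True
```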


def agent_control_healthcheck_loop():
reporting_frequency = _environ_as_int("NEW_RELIC_AGENT_CONTROL_HEALTH_FREQUENCY", 5)
# If we have an invalid integer value for frequency, default back to 5
if reporting_frequency <= 0:
reporting_frequency = 5

scheduler = sched.scheduler(time.time, time.sleep)

# Target this function when starting agent control health check threads to keep the scheduler running
scheduler.enter(reporting_frequency, 1, agent_control_healthcheck, (scheduler, reporting_frequency))
scheduler.run()


def agent_control_healthcheck(scheduler, reporting_frequency):
scheduler.enter(reporting_frequency, 1, agent_control_healthcheck, (scheduler, reporting_frequency))

agent_control_health_instance().write_to_health_file()
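The two functions above form a self-perpetuating loop: each run re-enters itself on the scheduler, so `scheduler.run()` never drains its event queue. A condensed model with a short interval and a stop condition so it terminates (illustrative only, the real loop runs until the daemon thread dies):

```python
import sched
import time

ticks = []

def tick(scheduler, interval):
    ticks.append(time.time())
    # Re-enter to keep the scheduler alive; stop after three runs here.
    if len(ticks) < 3:
        scheduler.enter(interval, 1, tick, (scheduler, interval))

scheduler = sched.scheduler(time.time, time.sleep)
scheduler.enter(0.01, 1, tick, (scheduler, 0.01))
scheduler.run()  # returns once no events remain
print(len(ticks))  # 3
```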
27 changes: 27 additions & 0 deletions newrelic/core/agent_protocol.py
@@ -47,6 +47,7 @@
NetworkInterfaceException,
RetryDataForRequest,
)
from newrelic.core.agent_control_health import HealthStatus, agent_control_health_instance

_logger = logging.getLogger(__name__)

@@ -188,6 +189,7 @@ def __init__(self, settings, host=None, client_cls=ApplicationModeClient):
"marshal_format": "json",
}
self._headers = {}
self._license_key = settings.license_key

# In Python 2, the JSON is loaded with unicode keys and values;
# however, the header name must be a non-unicode value when given to
@@ -209,6 +211,7 @@ def __init__(self, settings, host=None, client_cls=ApplicationModeClient):

# Do not access configuration anywhere inside the class
self.configuration = settings
self.agent_control = agent_control_health_instance()

def __enter__(self):
self.client.__enter__()
@@ -242,7 +245,27 @@ def send(
f"Supportability/Python/Collector/MaxPayloadSizeLimit/{method}",
1,
)
if status == 401:
# Check license key presence again so the missing-license status set in the
# initialize function is not overridden with invalid_license; a missing
# license key is also surfaced as a 401 status code
if not self._license_key:
self.agent_control.set_health_status(HealthStatus.MISSING_LICENSE.value)
else:
self.agent_control.set_health_status(HealthStatus.INVALID_LICENSE.value)

if status == 407:
self.agent_control.set_health_status(HealthStatus.PROXY_ERROR.value, status)

if status == 410:
self.agent_control.set_health_status(HealthStatus.FORCED_DISCONNECT.value)

level, message = self.LOG_MESSAGES.get(status, self.LOG_MESSAGES["default"])

# If the default error message was used, then we know we have a general HTTP error
if message.startswith("Received a non 200 or 202"):
self.agent_control.set_health_status(HealthStatus.HTTP_ERROR.value, status, method)

_logger.log(
level,
message,
@@ -258,9 +281,12 @@ def send(
"agent_run_id": self._run_token,
},
)

exception = self.STATUS_CODE_RESPONSE.get(status, DiscardDataForRequest)
raise exception
if status == 200:
# Check if we previously had a protocol related error and update to a healthy status
self.agent_control.update_to_healthy_status(protocol_error=True)
return self.decode_response(data)

def decode_response(self, response):
@@ -590,6 +616,7 @@ def __init__(self, settings, host=None, client_cls=ApplicationModeClient):

# Do not access configuration anywhere inside the class
self.configuration = settings
self.agent_control = agent_control_health_instance()

@classmethod
def connect(
28 changes: 25 additions & 3 deletions newrelic/core/application.py
@@ -49,6 +49,7 @@
RetryDataForRequest,
)
from newrelic.samplers.data_sampler import DataSampler
from newrelic.core.agent_control_health import HealthStatus, agent_control_healthcheck_loop, agent_control_health_instance

_logger = logging.getLogger(__name__)

@@ -110,6 +111,11 @@ def __init__(self, app_name, linked_applications=None):

self._remaining_plugins = True

self._agent_control_health_thread = threading.Thread(name="Agent-Control-Health-Session-Thread", target=agent_control_healthcheck_loop)
self._agent_control_health_thread.daemon = True
self._agent_control = agent_control_health_instance()


# We setup empty rules engines here even though they will be
# replaced when application first registered. This is done to
# avoid a race condition in setting it later. Otherwise we have
@@ -195,7 +201,6 @@ def activate_session(self, activate_agent=None, timeout=0.0):
to be activated.
"""

if self._agent_shutdown:
return

@@ -205,6 +210,9 @@ def activate_session(self, activate_agent=None, timeout=0.0):
if self._active_session:
return

if self._agent_control.health_check_enabled and not self._agent_control_health_thread.is_alive():
self._agent_control_health_thread.start()

self._process_id = os.getpid()

self._connected_event.clear()
@@ -225,7 +233,6 @@ def activate_session(self, activate_agent=None, timeout=0.0):
# timeout has likely occurred.

deadlock_timeout = 0.1

if timeout >= deadlock_timeout:
self._detect_deadlock = True

@@ -362,6 +369,7 @@ def connect_to_data_collector(self, activate_agent):
None, self._app_name, self.linked_applications, environment_settings()
)
except ForceAgentDisconnect:
self._agent_control.set_health_status(HealthStatus.FAILED_NR_CONNECTION.value)
# Any disconnect exception means we should stop trying to connect
_logger.error(
"The New Relic service has requested that the agent "
@@ -372,6 +380,7 @@ def connect_to_data_collector(self, activate_agent):
)
return
except NetworkInterfaceException:
self._agent_control.set_health_status(HealthStatus.FAILED_NR_CONNECTION.value)
active_session = None
except Exception:
# If an exception occurs after agent has been flagged to be
@@ -381,6 +390,7 @@ def connect_to_data_collector(self, activate_agent):
# the application is still running.

if not self._agent_shutdown and not self._pending_shutdown:
self._agent_control.set_health_status(HealthStatus.FAILED_NR_CONNECTION.value)
_logger.exception(
"Unexpected exception when registering "
"agent with the data collector. If this problem "
@@ -491,6 +501,8 @@ def connect_to_data_collector(self, activate_agent):
# data from a prior agent run for this application.

configuration = active_session.configuration
# Check if the agent previously had an unhealthy status related to the data collector and update
self._agent_control.update_to_healthy_status(collector_error=True)

with self._stats_lock:
self._stats_engine.reset_stats(configuration, reset_stream=True)
@@ -590,6 +602,13 @@ def connect_to_data_collector(self, activate_agent):
1,
)

# Agent Control health check metric
if self._agent_control.health_check_enabled:
internal_metric(
"Supportability/AgentControl/Health/enabled",
1,
)

self._stats_engine.merge_custom_metrics(internal_metrics.metrics())

# Update the active session in this object. This will the
@@ -665,7 +684,7 @@ def validate_process(self):
self._process_id = 0

def normalize_name(self, name, rule_type):
"""Applies the agent normalization rules of the the specified
"""Applies the agent normalization rules of the specified
rule type to the supplied name.
"""
@@ -1688,6 +1707,9 @@ def internal_agent_shutdown(self, restart=False):
optionally triggers activation of a new session.
"""
self._agent_control.set_health_status(HealthStatus.AGENT_SHUTDOWN.value)
if self._agent_control.health_check_enabled:
self._agent_control.write_to_health_file()

# We need to stop any thread profiler session related to this
# application.
119 changes: 119 additions & 0 deletions newrelic/hooks/datastore_cassandradriver.py
@@ -0,0 +1,119 @@
# Copyright 2010 New Relic, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from newrelic.api.database_trace import DatabaseTrace, register_database_client
from newrelic.api.function_trace import wrap_function_trace
from newrelic.api.time_trace import current_trace
from newrelic.common.object_wrapper import wrap_function_wrapper
from newrelic.common.signature import bind_args

DBAPI2_MODULE = None
DEFAULT = object()


def wrap_Session_execute(wrapped, instance, args, kwargs):
# Most of this wrapper is lifted from DBAPI2 wrappers, which can't be used
# directly since Cassandra doesn't actually conform to DBAPI2.

trace = current_trace()
if not trace or trace.terminal_node():
# Exit early if there's no transaction, or if we're under an existing DatabaseTrace
return wrapped(*args, **kwargs)

bound_args = bind_args(wrapped, args, kwargs)

sql_parameters = bound_args.get("parameters", None)

sql = bound_args.get("query", None)
if not isinstance(sql, str):
statement = getattr(sql, "prepared_statement", sql) # Unbind BoundStatement
sql = getattr(statement, "query_string", statement) # Unpack query from SimpleStatement and PreparedStatement

database_name = getattr(instance, "keyspace", None)

host = None
port = None
try:
contact_points = instance.cluster.contact_points
if len(contact_points) == 1:
contact_point = next(iter(contact_points))
if isinstance(contact_point, str):
host = contact_point
port = instance.cluster.port
elif isinstance(contact_point, tuple):
host, port = contact_point
else: # Handle cassandra.connection.Endpoint types
host = contact_point.address
port = contact_point.port
except Exception:
pass

if sql_parameters is not DEFAULT:
with DatabaseTrace(
sql=sql,
sql_parameters=sql_parameters,
execute_params=(args, kwargs),
host=host,
port_path_or_id=port,
database_name=database_name,
dbapi2_module=DBAPI2_MODULE,
source=wrapped,
):
return wrapped(*args, **kwargs)
else:
with DatabaseTrace(
sql=sql,
execute_params=(args, kwargs),
host=host,
port_path_or_id=port,
database_name=database_name,
dbapi2_module=DBAPI2_MODULE,
source=wrapped,
):
return wrapped(*args, **kwargs)
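The host/port extraction in the wrapper above handles three contact-point shapes the cassandra-driver can expose: a bare host string, a `(host, port)` tuple, or an `Endpoint`-like object with `address`/`port` attributes. A standalone sketch of that branch logic (`FakeEndpoint` is a stand-in for illustration, not a real driver class):

```python
class FakeEndpoint:
    # Mimics the address/port attributes of cassandra.connection.Endpoint
    address = "10.0.0.3"
    port = 9042

def extract_host_port(contact_point, default_port=9042):
    if isinstance(contact_point, str):
        return contact_point, default_port   # bare host: port comes from the cluster
    elif isinstance(contact_point, tuple):
        return contact_point                 # already (host, port)
    else:
        return contact_point.address, contact_point.port

print(extract_host_port("cassandra.local"))   # ('cassandra.local', 9042)
print(extract_host_port(("10.0.0.2", 9043)))  # ('10.0.0.2', 9043)
print(extract_host_port(FakeEndpoint()))      # ('10.0.0.3', 9042)
```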


def instrument_cassandra(module):
# Cassandra isn't DBAPI2 compliant, but we need the DatabaseTrace to function properly. We can set parameters
# for CQL parsing and the product name here, and leave the explain plan functionality unused.
global DBAPI2_MODULE
DBAPI2_MODULE = module

register_database_client(
module,
database_product="Cassandra",
quoting_style="single+double",
explain_query=None,
explain_stmts=(),
instance_info=None, # Already handled in wrappers
)


def instrument_cassandra_cluster(module):
if hasattr(module, "Session"):
# Cluster connect instrumentation, normally supplied by DBAPI2ConnectionFactory
wrap_function_trace(
module, "Cluster.connect", terminal=True, rollup=["Datastore/all", "Datastore/Cassandra/all"]
)

# Currently Session.execute() is a wrapper for calling Session.execute_async() and immediately waiting for
# the result. We therefore need to instrument Session.execute() in order to get timing information for sync
# query executions. We also need to instrument Session.execute_async() to at least get metrics for async
# queries, but we can't get timing information from that alone. We also need to add an early exit condition
# for when instrumentation for Session.execute_async() is called within Session.execute().
wrap_function_wrapper(module, "Session.execute", wrap_Session_execute)

# This wrapper only provides metrics, and not proper timing for async queries as they are distributed across
# potentially many threads at once. This is left uninstrumented for the time being.
wrap_function_wrapper(module, "Session.execute_async", wrap_Session_execute)
4 changes: 3 additions & 1 deletion newrelic/hooks/framework_starlette.py
@@ -252,7 +252,9 @@ def instrument_starlette_middleware_exceptions(module):
def instrument_starlette_exceptions(module):
# ExceptionMiddleware was moved to starlette.middleware.exceptions, need to check
# that it isn't being imported through a deprecation and double wrapped.
if not hasattr(module, "__deprecated__"):

# After v0.45.0, the import proxy for ExceptionMiddleware was removed from starlette.exceptions
if not hasattr(module, "__deprecated__") and hasattr(module, "ExceptionMiddleware"):

wrap_function_wrapper(module, "ExceptionMiddleware.__call__", error_middleware_wrapper)

16 changes: 16 additions & 0 deletions newrelic/hooks/logger_structlog.py
@@ -54,6 +54,17 @@ def new_relic_event_consumer(logger, level, event):
elif isinstance(event, dict):
message = original_message = event.get("event", "")
event_attrs = {k: v for k, v in event.items() if k != "event"}
elif isinstance(event, tuple):
try:
# This accounts for the ProcessorFormatter format:
# tuple[tuple[EventDict], dict[str, dict[str, Any]]]
_event = event[0][0]
message = original_message = _event.get("event")
event_attrs = {k: v for k, v in _event.items() if k != "event"}
except:
# In the case that this is a tuple but not in the
# ProcessorFormatter format. Unclear how to proceed.
return event
else:
# Unclear how to proceed, ignore log. Avoid logging an error message or we may incur an infinite loop.
return event
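The tuple branch added above targets structlog's ProcessorFormatter event shape, a `(args, kwargs)` tuple whose first positional argument wraps the EventDict. A quick illustration of that unpacking (sample data only):

```python
# ProcessorFormatter passes events as tuple[tuple[EventDict], dict]:
event = (({"event": "user logged in", "user_id": 42},), {})

_event = event[0][0]                 # reach the inner EventDict
message = _event.get("event")
event_attrs = {k: v for k, v in _event.items() if k != "event"}
print(message, event_attrs)  # user logged in {'user_id': 42}
```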
@@ -64,6 +75,11 @@ def new_relic_event_consumer(logger, level, event):
event = message
elif isinstance(event, dict) and "event" in event:
event["event"] = message
else:
try:
event[0][0]["event"] = message
except:
pass

level_name = normalize_level_name(level)

9 changes: 4 additions & 5 deletions newrelic/hooks/mlmodel_langchain.py
@@ -59,8 +59,7 @@
"langchain_community.vectorstores.documentdb": "DocumentDBVectorSearch",
"langchain_community.vectorstores.duckdb": "DuckDB",
"langchain_community.vectorstores.ecloud_vector_search": "EcloudESVectorStore",
"langchain_community.vectorstores.elastic_vector_search": "ElasticVectorSearch",
# "langchain_community.vectorstores.elastic_vector_search": "ElasticKnnSearch", # Deprecated
"langchain_community.vectorstores.elastic_vector_search": ["ElasticVectorSearch", "ElasticKnnSearch"],
"langchain_community.vectorstores.elasticsearch": "ElasticsearchStore",
"langchain_community.vectorstores.epsilla": "Epsilla",
"langchain_community.vectorstores.faiss": "FAISS",
@@ -93,7 +92,7 @@
"langchain_community.vectorstores.pgvector": "PGVector",
"langchain_community.vectorstores.pinecone": "Pinecone",
"langchain_community.vectorstores.qdrant": "Qdrant",
"langchain_community.vectorstores.redis.base": "Redis",
"langchain_community.vectorstores.redis": "Redis",
"langchain_community.vectorstores.relyt": "Relyt",
"langchain_community.vectorstores.rocksetdb": "Rockset",
"langchain_community.vectorstores.scann": "ScaNN",
@@ -105,6 +104,7 @@
"langchain_community.vectorstores.starrocks": "StarRocks",
"langchain_community.vectorstores.supabase": "SupabaseVectorStore",
"langchain_community.vectorstores.surrealdb": "SurrealDBStore",
"langchain_community.vectorstores.tablestore": "TablestoreVectorStore",
"langchain_community.vectorstores.tair": "Tair",
"langchain_community.vectorstores.tencentvectordb": "TencentVectorDB",
"langchain_community.vectorstores.tidb_vector": "TiDBVectorStore",
@@ -125,8 +125,7 @@
"langchain_community.vectorstores.yellowbrick": "Yellowbrick",
"langchain_community.vectorstores.zep_cloud": "ZepCloudVectorStore",
"langchain_community.vectorstores.zep": "ZepVectorStore",
"langchain_community.vectorstores.docarray.hnsw": "DocArrayHnswSearch",
"langchain_community.vectorstores.docarray.in_memory": "DocArrayInMemorySearch",
"langchain_community.vectorstores.docarray": ["DocArrayHnswSearch", "DocArrayInMemorySearch"],
}
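After this change, some values in the mapping are a single class-name string and others are a list of names (e.g. the `docarray` and `elastic_vector_search` entries). A hypothetical consumer sketch showing how both shapes can be normalized to lists (the dict literal here is a two-entry excerpt, not the full mapping):

```python
VECTORSTORE_CLASSES = {
    "langchain_community.vectorstores.faiss": "FAISS",
    "langchain_community.vectorstores.docarray": ["DocArrayHnswSearch", "DocArrayInMemorySearch"],
}

# Wrap bare strings in a list so every value has the same shape.
normalized = {
    module: [classes] if isinstance(classes, str) else classes
    for module, classes in VECTORSTORE_CLASSES.items()
}
print(normalized["langchain_community.vectorstores.faiss"])  # ['FAISS']
```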


46 changes: 45 additions & 1 deletion pyproject.toml
@@ -6,7 +6,43 @@ include = '\.pyi?$'
profile = "black"

[tool.pylint.messages_control]
disable = "C0103,C0114,C0115,C0116,C0209,C0302,C0415,E0401,E1120,R0205,R0401,R0801,R0902,R0903,R0904,R0911,R0912,R0913,R0914,R0915,R1705,R1710,R1725,W0201,W0212,W0223,W0402,W0603,W0612,W0613,W0702,W0703,W0706,line-too-long,redefined-outer-name"
disable = [
"C0103",
"C0114",
"C0115",
"C0116",
"C0209",
"C0302",
"C0415",
"E0401",
"E1120",
"R0205",
"R0401",
"R0801",
"R0902",
"R0903",
"R0904",
"R0911",
"R0912",
"R0913",
"R0914",
"R0915",
"R1705",
"R1710",
"R1725",
"W0201",
"W0212",
"W0223",
"W0402",
"W0603",
"W0612",
"W0613",
"W0702",
"W0703",
"W0706",
"line-too-long",
"redefined-outer-name",
]

[tool.pylint.format]
max-line-length = "120"
@@ -16,3 +52,11 @@ good-names = "exc,val,tb"

[tool.bandit]
skips = ["B110", "B101", "B404"]

[tool.flynt]
line-length = 999999
aggressive = true
transform-concats = true
transform-joins = true
exclude = ["newrelic/packages/", "setup.py"]
# setup.py needs to not immediately crash on Python 2 to log error messages, so disable fstrings
11 changes: 8 additions & 3 deletions setup.py
@@ -20,8 +20,10 @@
if python_version >= (3, 7):
pass
else:
error_msg = "The New Relic Python agent only supports Python 3.7+. We recommend upgrading to a newer version of Python."

error_msg = (
"The New Relic Python agent only supports Python 3.7+. We recommend upgrading to a newer version of Python."
)

try:
# Lookup table for the last agent versions to support each Python version.
last_supported_version_lookup = {
@@ -36,7 +38,10 @@

if last_supported_version:
python_version_str = "%s.%s" % (python_version[0], python_version[1])
error_msg += " The last agent version to support Python %s was v%s." % (python_version_str, last_supported_version)
error_msg += " The last agent version to support Python %s was v%s." % (
python_version_str,
last_supported_version,
)
except Exception:
pass

256 changes: 256 additions & 0 deletions tests/agent_features/test_agent_control_health_check.py
@@ -0,0 +1,256 @@
# Copyright 2010 New Relic, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import re
import threading
import time

import pytest
from testing_support.fixtures import initialize_agent
from testing_support.http_client_recorder import HttpClientRecorder

from newrelic.config import _reset_configuration_done, initialize
from newrelic.core.agent_control_health import (
HealthStatus,
agent_control_health_instance,
is_valid_file_delivery_location,
)
from newrelic.core.agent_protocol import AgentProtocol
from newrelic.core.application import Application
from newrelic.core.config import finalize_application_settings, global_settings
from newrelic.network.exceptions import DiscardDataForRequest


def get_health_file_contents(tmp_path):
# Grab the file we just wrote to and read its contents
health_files = os.listdir(tmp_path)
path_to_written_file = f"{tmp_path}/{health_files[0]}"
with open(path_to_written_file, "r") as f:
contents = f.readlines()
return contents


@pytest.mark.parametrize("file_uri", ["", "file://", "/test/dir", "foo:/test/dir"])
def test_invalid_file_directory_supplied(file_uri):
assert not is_valid_file_delivery_location(file_uri)


def test_agent_control_not_enabled(monkeypatch, tmp_path):
# Only monkeypatch a valid file URI for delivery location to test default "NEW_RELIC_AGENT_CONTROL_ENABLED" behavior
file_path = tmp_path.as_uri()
monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_HEALTH_DELIVERY_LOCATION", file_path)

assert not agent_control_health_instance().health_check_enabled


def test_write_to_file_healthy_status(monkeypatch, tmp_path):
# Setup expected env vars to run agent control health check
monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_ENABLED", True)
file_path = tmp_path.as_uri()
monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_HEALTH_DELIVERY_LOCATION", file_path)

# Write to health YAML file
agent_control_instance = agent_control_health_instance()
agent_control_instance.start_time_unix_nano = "1234567890"
agent_control_instance.write_to_health_file()

contents = get_health_file_contents(tmp_path)

# Assert on contents of health file
assert len(contents) == 4
assert contents[0] == "healthy: True\n"
assert contents[1] == "status: Healthy\n"
assert int(re.search(r"status_time_unix_nano: (\d+)", contents[3]).group(1)) > 0


def test_write_to_file_unhealthy_status(monkeypatch, tmp_path):
# Setup expected env vars to run agent control health check
monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_ENABLED", True)
file_path = tmp_path.as_uri()
monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_HEALTH_DELIVERY_LOCATION", file_path)

# Write to health YAML file
agent_control_instance = agent_control_health_instance()
agent_control_instance.start_time_unix_nano = "1234567890"
agent_control_instance.set_health_status(HealthStatus.INVALID_LICENSE.value)

agent_control_instance.write_to_health_file()

contents = get_health_file_contents(tmp_path)

# Assert on contents of health file
assert len(contents) == 5
assert contents[0] == "healthy: False\n"
assert contents[1] == "status: Invalid license key (HTTP status code 401)\n"
assert contents[2] == "start_time_unix_nano: 1234567890\n"
assert int(re.search(r"status_time_unix_nano: (\d+)", contents[3]).group(1)) > 0
assert contents[4] == "last_error: NR-APM-001\n"


def test_no_override_on_unhealthy_shutdown(monkeypatch, tmp_path):
# Setup expected env vars to run agent control health check
monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_ENABLED", True)
file_path = tmp_path.as_uri()
monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_HEALTH_DELIVERY_LOCATION", file_path)

# Write to health YAML file
agent_control_instance = agent_control_health_instance()
agent_control_instance.start_time_unix_nano = "1234567890"
agent_control_instance.set_health_status(HealthStatus.INVALID_LICENSE.value)

# Attempt to override a previously unhealthy status
agent_control_instance.set_health_status(HealthStatus.AGENT_SHUTDOWN.value)
agent_control_instance.write_to_health_file()

contents = get_health_file_contents(tmp_path)

# Assert on contents of health file
assert len(contents) == 5
assert contents[0] == "healthy: False\n"
assert contents[1] == "status: Invalid license key (HTTP status code 401)\n"
assert contents[4] == "last_error: NR-APM-001\n"


def test_health_check_running_threads(monkeypatch, tmp_path):
running_threads = threading.enumerate()
# Only the main thread should be running since no agent control env vars are set
assert len(running_threads) == 1

monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_ENABLED", True)
file_path = tmp_path.as_uri()
monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_HEALTH_DELIVERY_LOCATION", file_path)

# Re-initialize the agent to allow the health check thread to start and assert that it did
_reset_configuration_done()
initialize()

running_threads = threading.enumerate()

# Two expected threads: One main agent thread and one main health thread since we have no additional active sessions
assert len(running_threads) == 2
assert running_threads[1].name == "Agent-Control-Health-Main-Thread"


def test_proxy_error_status(monkeypatch, tmp_path):
# Setup expected env vars to run agent control health check
monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_ENABLED", True)
file_path = tmp_path.as_uri()
monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_HEALTH_DELIVERY_LOCATION", file_path)

_reset_configuration_done()
initialize()

# Mock a 407 error to generate a proxy error health status
HttpClientRecorder.STATUS_CODE = 407
settings = finalize_application_settings(
{
"license_key": "123LICENSEKEY",
}
)
protocol = AgentProtocol(settings, client_cls=HttpClientRecorder)

with pytest.raises(DiscardDataForRequest):
protocol.send("analytic_event_data")

# Give time for the scheduler to kick in and write to the health file
time.sleep(5)

contents = get_health_file_contents(tmp_path)

# Assert on contents of health file
assert len(contents) == 5
assert contents[0] == "healthy: False\n"
assert contents[1] == "status: HTTP Proxy configuration error; response code 407\n"
assert contents[4] == "last_error: NR-APM-007\n"


def test_multiple_activations_running_threads(monkeypatch, tmp_path):
# Setup expected env vars to run agent control health check
    monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_ENABLED", "True")
file_path = tmp_path.as_uri()
monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_HEALTH_DELIVERY_LOCATION", file_path)

_reset_configuration_done()
initialize()

application_1 = Application("Test App 1")
application_2 = Application("Test App 2")

application_1.activate_session()
application_2.activate_session()

running_threads = threading.enumerate()

# 6 threads expected: One main agent thread, two active session threads, one main health check thread, and two
# active session health threads
assert len(running_threads) == 6
assert running_threads[1].name == "Agent-Control-Health-Main-Thread"
assert running_threads[2].name == "Agent-Control-Health-Session-Thread"
assert running_threads[4].name == "Agent-Control-Health-Session-Thread"


def test_update_to_healthy(monkeypatch, tmp_path):
# Setup expected env vars to run agent control health check
    monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_ENABLED", "True")
file_path = tmp_path.as_uri()
monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_HEALTH_DELIVERY_LOCATION", file_path)

_reset_configuration_done()

# Write to health YAML file
agent_control_instance = agent_control_health_instance()
agent_control_instance.start_time_unix_nano = "1234567890"
agent_control_instance.set_health_status(HealthStatus.FORCED_DISCONNECT.value)

# Send a successful data batch to enable health status to update to "healthy"
HttpClientRecorder.STATUS_CODE = 200
settings = finalize_application_settings(
{
"license_key": "123LICENSEKEY",
}
)
protocol = AgentProtocol(settings, client_cls=HttpClientRecorder)
protocol.send("analytic_event_data")

agent_control_instance.write_to_health_file()

contents = get_health_file_contents(tmp_path)

# Assert on contents of health file
assert contents[0] == "healthy: True\n"
assert contents[1] == "status: Healthy\n"


def test_max_app_name_status(monkeypatch, tmp_path):
# Setup expected env vars to run agent control health check
    monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_ENABLED", "True")
file_path = tmp_path.as_uri()
monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_HEALTH_DELIVERY_LOCATION", file_path)

_reset_configuration_done()
initialize_agent(app_name="test1;test2;test3;test4")
# Give time for the scheduler to kick in and write to the health file
time.sleep(5)

contents = get_health_file_contents(tmp_path)

# Assert on contents of health file
assert len(contents) == 5
assert contents[0] == "healthy: False\n"
assert contents[1] == "status: The maximum number of configured app names (3) exceeded\n"
assert contents[4] == "last_error: NR-APM-006\n"

    # Set app name back to the originally specified name
settings = global_settings()
settings.app_name = "Python Agent Test (agent_features)"
22 changes: 13 additions & 9 deletions tests/agent_features/test_logs_in_context.py
@@ -17,7 +17,7 @@
import sys

from io import StringIO as Buffer
from traceback import format_tb
from traceback import format_exception

import pytest

@@ -237,12 +237,14 @@ def test_newrelic_logger_error_inside_transaction_no_stack_trace(log_buffer):

@background_task()
def test_newrelic_logger_error_inside_transaction_with_stack_trace(log_buffer_with_stack_trace):
expected_stack_trace = ""
try:
raise ExceptionForTest
try:
raise ExceptionForTest("cause")
except ExceptionForTest:
raise ExceptionForTest("exception-with-cause")
except ExceptionForTest:
_logger.exception("oops")
expected_stack_trace = "".join(format_tb(sys.exc_info()[2]))
expected_stack_trace = "".join(format_exception(*sys.exc_info()))

log_buffer_with_stack_trace.seek(0)
message = json.load(log_buffer_with_stack_trace)
@@ -271,7 +273,7 @@ def test_newrelic_logger_error_inside_transaction_with_stack_trace(log_buffer_wi
"thread.name": "MainThread",
"process.name": "MainProcess",
"error.class": "test_logs_in_context:ExceptionForTest",
"error.message": "",
"error.message": "exception-with-cause",
"error.expected": False
}
expected_extra_txn_keys = (
@@ -331,12 +333,14 @@ def test_newrelic_logger_error_outside_transaction_no_stack_trace(log_buffer):


def test_newrelic_logger_error_outside_transaction_with_stack_trace(log_buffer_with_stack_trace):
expected_stack_trace = ""
try:
raise ExceptionForTest
try:
raise ExceptionForTest("cause")
except ExceptionForTest:
raise ExceptionForTest("exception-with-cause")
except ExceptionForTest:
_logger.exception("oops")
expected_stack_trace = "".join(format_tb(sys.exc_info()[2]))
expected_stack_trace = "".join(format_exception(*sys.exc_info()))

log_buffer_with_stack_trace.seek(0)
message = json.load(log_buffer_with_stack_trace)
@@ -365,7 +369,7 @@ def test_newrelic_logger_error_outside_transaction_with_stack_trace(log_buffer_w
"thread.name": "MainThread",
"process.name": "MainProcess",
"error.class": "test_logs_in_context:ExceptionForTest",
"error.message": "",
"error.message": "exception-with-cause",
}
expected_extra_txn_keys = (
"entity.guid",
23 changes: 23 additions & 0 deletions tests/agent_unittests/test_agent_connect.py
@@ -101,3 +101,26 @@ def test_ml_streaming_disabled_supportability_metrics():
app.connect_to_data_collector(None)

assert app._active_session


@override_generic_settings(
SETTINGS,
{
"developer_mode": True,
},
)
@validate_internal_metrics(
[
("Supportability/AgentControl/Health/enabled", 1),
]
)
def test_agent_control_health_supportability_metric(monkeypatch, tmp_path):
# Setup expected env vars to run agent control health check
    monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_ENABLED", "True")
file_path = tmp_path.as_uri()
monkeypatch.setenv("NEW_RELIC_AGENT_CONTROL_HEALTH_DELIVERY_LOCATION", file_path)

app = Application("Python Agent Test (agent_unittests-connect)")
app.connect_to_data_collector(None)

assert app._active_session
40 changes: 5 additions & 35 deletions tests/agent_unittests/test_agent_protocol.py
@@ -16,7 +16,6 @@
import os
import ssl
import tempfile
from collections import namedtuple

import pytest

@@ -35,8 +34,7 @@
NetworkInterfaceException,
RetryDataForRequest,
)

Request = namedtuple("Request", ("method", "path", "params", "headers", "payload"))
from testing_support.http_client_recorder import HttpClientRecorder


# Global constants used in tests
@@ -64,36 +62,6 @@
ERROR_EVENT_DATA = 100


class HttpClientRecorder(DeveloperModeClient):
SENT = []
STATUS_CODE = None
STATE = 0

def send_request(
self,
method="POST",
path="/agent_listener/invoke_raw_method",
params=None,
headers=None,
payload=None,
):
request = Request(method=method, path=path, params=params, headers=headers, payload=payload)
self.SENT.append(request)
if self.STATUS_CODE:
return self.STATUS_CODE, b""

return super(HttpClientRecorder, self).send_request(method, path, params, headers, payload)

def __enter__(self):
HttpClientRecorder.STATE += 1

def __exit__(self, exc, value, tb):
HttpClientRecorder.STATE -= 1

def close_connection(self):
HttpClientRecorder.STATE -= 1


class HttpClientException(DeveloperModeClient):
def send_request(
self,
@@ -332,7 +300,9 @@ def connect_payload_asserts(
else:
assert "ip_address" not in payload_data["utilization"]

utilization_len = utilization_len + any([with_aws, with_ecs, with_pcf, with_gcp, with_azure, with_docker, with_kubernetes])
utilization_len = utilization_len + any(
[with_aws, with_ecs, with_pcf, with_gcp, with_azure, with_docker, with_kubernetes]
)
assert len(payload_data["utilization"]) == utilization_len
assert payload_data["utilization"]["hostname"] == HOST

@@ -580,7 +550,7 @@ def test_audit_logging():
)
def test_ca_bundle_path(monkeypatch, ca_bundle_path):
# Pretend CA certificates are not available
class DefaultVerifyPaths():
class DefaultVerifyPaths:
cafile = None
capath = None

13 changes: 9 additions & 4 deletions tests/cross_agent/test_ecs_data.py
@@ -13,15 +13,20 @@
# limitations under the License.

import os

import pytest
import newrelic.common.utilization as u
from fixtures.ecs_container_id.ecs_mock_server import mock_server, bad_response_mock_server
from fixtures.ecs_container_id.ecs_mock_server import (
bad_response_mock_server,
mock_server,
)
from test_pcf_utilization_data import Environ

import newrelic.common.utilization as u


@pytest.mark.parametrize("env_key", ["ECS_CONTAINER_METADATA_URI_V4", "ECS_CONTAINER_METADATA_URI"])
def test_ecs_docker_container_id(env_key, mock_server):
mock_endpoint = "http://localhost:%d" % mock_server.port
mock_endpoint = f"http://localhost:{int(mock_server.port)}"
env_dict = {env_key: mock_endpoint}

with Environ(env_dict):
@@ -41,7 +46,7 @@ def test_ecs_docker_container_id_bad_uri(env_dict, mock_server):


def test_ecs_docker_container_id_bad_response(bad_response_mock_server):
mock_endpoint = "http://localhost:%d" % bad_response_mock_server.port
mock_endpoint = f"http://localhost:{int(bad_response_mock_server.port)}"
env_dict = {"ECS_CONTAINER_METADATA_URI": mock_endpoint}

with Environ(env_dict):
91 changes: 91 additions & 0 deletions tests/datastore_cassandradriver/conftest.py
@@ -0,0 +1,91 @@
# Copyright 2010 New Relic, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys

import pytest
from testing_support.db_settings import cassandra_settings
from testing_support.fixtures import ( # noqa: F401; pylint: disable=W0611
collector_agent_registration_fixture,
collector_available_fixture,
)

DB_SETTINGS = cassandra_settings()
PYTHON_VERSION = sys.version_info
IS_PYPY = hasattr(sys, "pypy_version_info")


_default_settings = {
    "package_reporting.enabled": False,  # Turn off package reporting for testing as it causes slowdowns.
"transaction_tracer.explain_threshold": 0.0,
"transaction_tracer.transaction_threshold": 0.0,
"transaction_tracer.stack_trace_threshold": 0.0,
"debug.log_data_collector_payloads": True,
"debug.record_transaction_failure": True,
"debug.log_explain_plan_queries": True,
}

collector_agent_registration = collector_agent_registration_fixture(
app_name="Python Agent Test (datastore_cassandradriver)",
default_settings=_default_settings,
linked_applications=["Python Agent Test (datastore)"],
)


@pytest.fixture(scope="function", params=["Libev", "AsyncCore", "Twisted"])
def connection_class(request):
# Configure tests to run against a specific async reactor.
reactor_name = request.param

if reactor_name == "Libev":
if IS_PYPY:
pytest.skip(reason="Libev not available for PyPy.")
from cassandra.io.libevreactor import LibevConnection as Connection
elif reactor_name == "AsyncCore":
if PYTHON_VERSION >= (3, 12):
pytest.skip(reason="asyncore was removed from stdlib in Python 3.12.")
from cassandra.io.asyncorereactor import AsyncoreConnection as Connection
elif reactor_name == "Twisted":
from cassandra.io.twistedreactor import TwistedConnection as Connection
elif reactor_name == "AsyncIO":
# AsyncIO reactor is experimental and currently non-functional. Not testing it yet.
from cassandra.io.asyncioreactor import AsyncioConnection as Connection

return Connection


@pytest.fixture(scope="function")
def cluster_options(connection_class):
from cassandra.cluster import ExecutionProfile
from cassandra.policies import RoundRobinPolicy

load_balancing_policy = RoundRobinPolicy()
execution_profiles = {
"default": ExecutionProfile(load_balancing_policy=load_balancing_policy, request_timeout=60.0)
}
cluster_options = {
"contact_points": [(node["host"], node["port"]) for node in DB_SETTINGS],
"execution_profiles": execution_profiles,
"connection_class": connection_class,
"protocol_version": 4,
}
yield cluster_options


@pytest.fixture(scope="function")
def cluster(cluster_options):
from cassandra.cluster import Cluster

cluster = Cluster(**cluster_options)
yield cluster
172 changes: 172 additions & 0 deletions tests/datastore_cassandradriver/test_cassandra.py
@@ -0,0 +1,172 @@
# Copyright 2010 New Relic, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
from cassandra.query import SimpleStatement
from testing_support.db_settings import cassandra_settings
from testing_support.util import instance_hostname
from testing_support.validators.validate_transaction_metrics import (
validate_transaction_metrics,
)

from newrelic.api.background_task import background_task

DB_SETTINGS = cassandra_settings()[0]
HOST = instance_hostname(DB_SETTINGS["host"])
PORT = DB_SETTINGS["port"]
KEYSPACE = DB_SETTINGS["keyspace"]
TABLE_NAME = DB_SETTINGS["table_name"]
FULL_TABLE_NAME = f"{KEYSPACE}.{TABLE_NAME}" # Fully qualified table name with keyspace
REPLICATION_STRATEGY = "{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }"


@pytest.fixture(scope="function")
def exercise(cluster):
def _exercise(is_async=False):
with cluster.connect() as session:
# Run async queries with execute_async() and .result(), and sync queries with execute()
if is_async:
execute_query = lambda query, *args: session.execute_async(query, *args).result()
else:
execute_query = lambda query, *args: session.execute(query, *args)

execute_query(
f"create keyspace if not exists {KEYSPACE} with replication = {REPLICATION_STRATEGY} and durable_writes = false;",
)
session.set_keyspace(KEYSPACE) # Should be captured as "USE" query

# Alter keyspace to enable durable writes
execute_query(
f"alter keyspace {KEYSPACE} with replication = {REPLICATION_STRATEGY} and durable_writes = true;"
)

execute_query(f"drop table if exists {FULL_TABLE_NAME}")
execute_query(f"create table {FULL_TABLE_NAME} (a int, b double, primary key (a))")
execute_query(f"alter table {FULL_TABLE_NAME} add c ascii")

execute_query(f"create index {TABLE_NAME}_index on {FULL_TABLE_NAME} (c)")

execute_query(
f"insert into {FULL_TABLE_NAME} (a, b, c) values (%(a)s, %(b)s, %(c)s)",
{"a": 1, "b": 1.0, "c": "1.0"},
)

execute_query(
f"""
begin batch
insert into {FULL_TABLE_NAME} (a, b, c) values (%(a1)s, %(b1)s, %(c1)s);
insert into {FULL_TABLE_NAME} (a, b, c) values (%(a2)s, %(b2)s, %(c2)s);
apply batch;
""",
{"a1": 2, "b1": 2.2, "c1": "2.2", "a2": 3, "b2": 3.3, "c2": "3.3"},
)

cursor = execute_query(f"select * from {FULL_TABLE_NAME}")
_ = [row for row in cursor]

execute_query(
f"update {FULL_TABLE_NAME} set b=%(b)s, c=%(c)s where a=%(a)s",
{"a": 1, "b": 4.0, "c": "4.0"},
)
execute_query(f"delete from {FULL_TABLE_NAME} where a=2")
execute_query(f"truncate {FULL_TABLE_NAME}")

# SimpleStatement
simple_statement = SimpleStatement(
f"insert into {FULL_TABLE_NAME} (a, b, c) values (%s, %s, %s)",
)
execute_query(
simple_statement,
(6, 6.0, "6.0"),
)

# PreparedStatement
prepared_statement = session.prepare(
f"insert into {FULL_TABLE_NAME} (a, b, c) values (?, ?, ?)",
)
execute_query(
prepared_statement,
(5, 5.0, "5.0"),
)

# BoundStatement
bound_statement = prepared_statement.bind((7, 7.0, "7.0"))
execute_query(
bound_statement,
{"a": 5, "b": 5.0, "c": "5.0"},
)

execute_query(f"drop index {TABLE_NAME}_index")
execute_query(f"drop table {FULL_TABLE_NAME}")
execute_query(f"drop keyspace {KEYSPACE}")

return _exercise


_test_execute_scoped_metrics = [
("Function/cassandra.cluster:Cluster.connect", 1),
("Datastore/operation/Cassandra/alter", 2),
("Datastore/operation/Cassandra/begin", 1),
("Datastore/operation/Cassandra/create", 3),
(f"Datastore/statement/Cassandra/{FULL_TABLE_NAME}/delete", 1),
("Datastore/operation/Cassandra/drop", 4),
(f"Datastore/statement/Cassandra/{FULL_TABLE_NAME}/insert", 4),
("Datastore/operation/Cassandra/other", 2),
(f"Datastore/statement/Cassandra/{FULL_TABLE_NAME}/select", 1),
(f"Datastore/statement/Cassandra/{FULL_TABLE_NAME}/update", 1),
]

_test_execute_rollup_metrics = [
("Function/cassandra.cluster:Cluster.connect", 1),
(f"Datastore/instance/Cassandra/{HOST}/{PORT}", 19),
("Datastore/all", 20),
("Datastore/allOther", 20),
("Datastore/Cassandra/all", 20),
("Datastore/Cassandra/allOther", 20),
("Datastore/operation/Cassandra/alter", 2),
("Datastore/operation/Cassandra/begin", 1),
("Datastore/operation/Cassandra/create", 3),
("Datastore/operation/Cassandra/delete", 1),
(f"Datastore/statement/Cassandra/{FULL_TABLE_NAME}/delete", 1),
("Datastore/operation/Cassandra/drop", 4),
("Datastore/operation/Cassandra/insert", 4),
(f"Datastore/statement/Cassandra/{FULL_TABLE_NAME}/insert", 4),
("Datastore/operation/Cassandra/other", 2),
("Datastore/operation/Cassandra/select", 1),
(f"Datastore/statement/Cassandra/{FULL_TABLE_NAME}/select", 1),
("Datastore/operation/Cassandra/update", 1),
(f"Datastore/statement/Cassandra/{FULL_TABLE_NAME}/update", 1),
]


@validate_transaction_metrics(
"test_cassandra:test_execute",
scoped_metrics=_test_execute_scoped_metrics,
rollup_metrics=_test_execute_rollup_metrics,
background_task=True,
)
@background_task()
def test_execute(exercise):
exercise(is_async=False)


@validate_transaction_metrics(
"test_cassandra:test_execute_async",
scoped_metrics=_test_execute_scoped_metrics,
rollup_metrics=_test_execute_rollup_metrics,
background_task=True,
)
@background_task()
def test_execute_async(exercise):
exercise(is_async=True)
144 changes: 144 additions & 0 deletions tests/datastore_cassandradriver/test_cqlengine.py
@@ -0,0 +1,144 @@
# Copyright 2010 New Relic, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os

import pytest
from cassandra.cqlengine import columns, connection
from cassandra.cqlengine.management import (
create_keyspace_simple,
drop_keyspace,
drop_table,
sync_table,
)
from cassandra.cqlengine.models import Model
from cassandra.cqlengine.query import BatchQuery
from testing_support.db_settings import cassandra_settings
from testing_support.util import instance_hostname
from testing_support.validators.validate_transaction_metrics import (
validate_transaction_metrics,
)

from newrelic.api.background_task import background_task

DB_SETTINGS = cassandra_settings()[0]
HOST = instance_hostname(DB_SETTINGS["host"])
PORT = DB_SETTINGS["port"]
KEYSPACE = f"{DB_SETTINGS['keyspace']}_orm"
TABLE_NAME = f"{DB_SETTINGS['table_name']}_orm"
FULL_TABLE_NAME = f"{KEYSPACE}.{TABLE_NAME}" # Fully qualified table name with keyspace


class ABCModel(Model):
__keyspace__ = KEYSPACE
__table_name__ = TABLE_NAME

a = columns.Integer(primary_key=True)
b = columns.Double()


@pytest.fixture(scope="function")
def exercise(cluster_options):
# Silence warning from cqlengine when creating tables
os.environ["CQLENG_ALLOW_SCHEMA_MANAGEMENT"] = "true"

def _exercise():
host = cluster_options.pop("contact_points")
connection.setup(host, KEYSPACE, **cluster_options)

# Initial setup
create_keyspace_simple(KEYSPACE, replication_factor=1)

# Create initial model
class ABModel(Model):
__keyspace__ = KEYSPACE
__table_name__ = TABLE_NAME

a = columns.Integer(primary_key=True)
b = columns.Double()

drop_table(ABModel) # Drop initial table if exists
sync_table(ABModel) # Create table

# Alter model via inheritance
class ABCModel(ABModel):
c = columns.Text(index=True)

        sync_table(ABCModel)  # Alter table to add column

m1 = ABCModel.create(a=1, b=1.0, c="1.0") # Insert query

# Batch insert query
with BatchQuery() as b:
ABCModel.batch(b).create(a=2, b=2.0, c="2.0")
ABCModel.batch(b).create(a=3, b=3.0, c="3.0")

cursor = ABCModel.objects() # Select query
_ = [row for row in cursor]

# Update query
m1.update(**{"b": 4.0, "c": "4.0"})
m1.save()

ABCModel.filter(a=2).delete() # Delete query

drop_table(ABCModel)
drop_keyspace(KEYSPACE)

return _exercise


_test_execute_scoped_metrics = [
("Function/cassandra.cluster:Cluster.connect", 1),
("Datastore/operation/Cassandra/alter", 1),
("Datastore/operation/Cassandra/begin", 1),
("Datastore/operation/Cassandra/create", 3),
(f"Datastore/statement/Cassandra/{FULL_TABLE_NAME}/delete", 1),
("Datastore/operation/Cassandra/drop", 2),
(f"Datastore/statement/Cassandra/{FULL_TABLE_NAME}/insert", 1),
(f"Datastore/statement/Cassandra/{FULL_TABLE_NAME}/select", 1),
(f"Datastore/statement/Cassandra/{FULL_TABLE_NAME}/update", 1),
]

_test_execute_rollup_metrics = [
("Function/cassandra.cluster:Cluster.connect", 1),
(f"Datastore/instance/Cassandra/{HOST}/{PORT}", 11),
("Datastore/all", 12),
("Datastore/allOther", 12),
("Datastore/Cassandra/all", 12),
("Datastore/Cassandra/allOther", 12),
("Datastore/operation/Cassandra/alter", 1),
("Datastore/operation/Cassandra/begin", 1),
("Datastore/operation/Cassandra/create", 3),
("Datastore/operation/Cassandra/delete", 1),
(f"Datastore/statement/Cassandra/{FULL_TABLE_NAME}/delete", 1),
("Datastore/operation/Cassandra/drop", 2),
("Datastore/operation/Cassandra/insert", 1),
(f"Datastore/statement/Cassandra/{FULL_TABLE_NAME}/insert", 1),
("Datastore/operation/Cassandra/select", 1),
(f"Datastore/statement/Cassandra/{FULL_TABLE_NAME}/select", 1),
("Datastore/operation/Cassandra/update", 1),
(f"Datastore/statement/Cassandra/{FULL_TABLE_NAME}/update", 1),
]


@validate_transaction_metrics(
"test_cqlengine:test_model",
scoped_metrics=_test_execute_scoped_metrics,
rollup_metrics=_test_execute_rollup_metrics,
background_task=True,
)
@background_task()
def test_model(exercise):
exercise()
2 changes: 1 addition & 1 deletion tests/datastore_motor/test_collection.py
@@ -82,7 +82,7 @@ async def exercise_motor(db):
await collection.options()
collection.watch()

new_name = MONGODB_COLLECTION + "_renamed"
new_name = f"{MONGODB_COLLECTION}_renamed"
await collection.rename(new_name)
await db[new_name].drop()
await collection.drop()
2 changes: 1 addition & 1 deletion tests/datastore_pymongo/test_async_collection.py
@@ -89,7 +89,7 @@ async def _exercise_mongo(db):
await collection.find_one_and_update({"x": 301}, {"$inc": {"x": 300}})
await collection.options()

new_name = MONGODB_COLLECTION + "_renamed"
new_name = f"{MONGODB_COLLECTION}_renamed"
await collection.rename(new_name)
await db[new_name].drop()
await collection.drop()
2 changes: 1 addition & 1 deletion tests/datastore_pymongo/test_collection.py
@@ -127,7 +127,7 @@ def _exercise_mongo_v4(db):
collection.find_one_and_update({"x": 301}, {"$inc": {"x": 300}})
collection.options()

new_name = MONGODB_COLLECTION + "_renamed"
new_name = f"{MONGODB_COLLECTION}_renamed"
collection.rename(new_name)
db[new_name].drop()
collection.drop()
273 changes: 273 additions & 0 deletions tests/logger_structlog/test_processor_formatter.py
@@ -0,0 +1,273 @@
# Copyright 2010 New Relic, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import platform

import pytest
from testing_support.fixtures import (
override_application_settings,
reset_core_stats_engine,
)
from testing_support.validators.validate_custom_metrics_outside_transaction import (
validate_custom_metrics_outside_transaction,
)
from testing_support.validators.validate_log_event_count import validate_log_event_count
from testing_support.validators.validate_log_event_count_outside_transaction import (
validate_log_event_count_outside_transaction,
)
from testing_support.validators.validate_log_events import validate_log_events
from testing_support.validators.validate_log_events_outside_transaction import (
validate_log_events_outside_transaction,
)
from testing_support.validators.validate_transaction_metrics import (
validate_transaction_metrics,
)

from newrelic.api.application import application_settings
from newrelic.api.background_task import background_task, current_transaction

"""
This file tests structlog's ability to render log entries through
structlog's `ProcessorFormatter`, used as a `logging.Formatter`, covering
both stdlib logging and structlog log entries.
"""


@pytest.fixture(scope="function")
def structlog_formatter_within_logging(structlog_caplog):
import logging

import structlog

class CaplogHandler(logging.StreamHandler):
"""
        To prevent possible issues with pytest's monkey patching,
        use a custom caplog handler to capture all records
"""

def __init__(self, *args, **kwargs):
self.records = []
super(CaplogHandler, self).__init__(*args, **kwargs)

def emit(self, record):
self.records.append(self.format(record))

structlog.configure(
processors=[
structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
],
logger_factory=lambda *args, **kwargs: structlog_caplog,
)

handler = CaplogHandler()
formatter = structlog.stdlib.ProcessorFormatter(
processors=[structlog.dev.ConsoleRenderer()],
)

handler.setFormatter(formatter)

logging_logger = logging.getLogger()
logging_logger.addHandler(handler)
logging_logger.caplog = handler
logging_logger.setLevel(logging.WARNING)

structlog_logger = structlog.get_logger(logger_attr=2)

yield logging_logger, structlog_logger


@pytest.fixture
def exercise_both_loggers(set_trace_ids, structlog_formatter_within_logging, structlog_caplog):
def _exercise():
if current_transaction():
set_trace_ids()

logging_logger, structlog_logger = structlog_formatter_within_logging

logging_logger.info("Cat", a=42)
logging_logger.error("Dog")
logging_logger.critical("Elephant")

structlog_logger.info("Bird")
structlog_logger.error("Fish", events="water")
structlog_logger.critical("Giraffe")

assert len(structlog_caplog.caplog) == 3 # set to INFO level
assert len(logging_logger.caplog.records) == 2 # set to WARNING level

assert "Dog" in logging_logger.caplog.records[0]
assert "Elephant" in logging_logger.caplog.records[1]

assert "Bird" in structlog_caplog.caplog[0]["event"]
assert "Fish" in structlog_caplog.caplog[1]["event"]
assert "Giraffe" in structlog_caplog.caplog[2]["event"]

return _exercise


# Test attributes
# ---------------------


@validate_log_events(
[
{ # Fixed attributes
"message": "context_attrs: arg1",
"context.kwarg_attr": 1,
"context.logger_attr": 2,
}
],
)
@validate_log_event_count(1)
@background_task()
def test_processor_formatter_context_attributes(structlog_formatter_within_logging):
_, structlog_logger = structlog_formatter_within_logging
structlog_logger.error("context_attrs: %s", "arg1", kwarg_attr=1)


@validate_log_events([{"message": "A", "message.attr": 1}])
@validate_log_event_count(1)
@background_task()
def test_processor_formatter_message_attributes(structlog_formatter_within_logging):
_, structlog_logger = structlog_formatter_within_logging
structlog_logger.error({"message": "A", "attr": 1})


# Test local decorating
# ---------------------


def get_metadata_string(log_message, is_txn):
host = platform.uname()[1]
assert host
entity_guid = application_settings().entity_guid
if is_txn:
metadata_string = (
f"NR-LINKING|{entity_guid}|{host}|abcdefgh12345678|abcdefgh|Python%20Agent%20Test%20%28logger_structlog%29|"
)
else:
metadata_string = f"NR-LINKING|{entity_guid}|{host}|||Python%20Agent%20Test%20%28logger_structlog%29|"
formatted_string = f"{log_message} {metadata_string}"
return formatted_string
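A hedged companion sketch: a hypothetical parser (not part of the agent) that splits the pipe-delimited NR-LINKING payload built above back into its fields; the guid and host values below are made up for illustration.

```python
# Hypothetical helper (not agent code) that inverts the NR-LINKING blob
# built by get_metadata_string(). Guid/host values are illustrative only.
from urllib.parse import unquote


def parse_nr_linking(decorated_line):
    """Split a decorated log line's NR-LINKING blob into named fields."""
    _, _, blob = decorated_line.partition("NR-LINKING|")
    entity_guid, hostname, trace_id, span_id, entity_name = blob.split("|")[:5]
    return {
        "entity.guid": entity_guid,
        "hostname": hostname,
        "trace.id": trace_id or None,  # empty outside a transaction
        "span.id": span_id or None,
        "entity.name": unquote(entity_name),  # name is URL-encoded in the blob
    }


line = "Fish NR-LINKING|FAKEGUID|myhost|abcdefgh12345678|abcdefgh|Python%20Agent%20Test%20%28logger_structlog%29|"
print(parse_nr_linking(line)["entity.name"])  # Python Agent Test (logger_structlog)
```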


@reset_core_stats_engine()
def test_processor_formatter_local_log_decoration_inside_transaction(exercise_both_loggers, structlog_caplog):
@validate_log_event_count(5)
@background_task()
def test():
exercise_both_loggers()
assert get_metadata_string("Fish", True) in structlog_caplog.caplog[1]["event"]

test()


@reset_core_stats_engine()
def test_processor_formatter_local_log_decoration_outside_transaction(exercise_both_loggers, structlog_caplog):
@validate_log_event_count_outside_transaction(5)
def test():
exercise_both_loggers()
assert get_metadata_string("Fish", False) in structlog_caplog.caplog[1]["event"]

test()


# Test log forwarding
# ---------------------

_common_attributes_service_linking = {
"timestamp": None,
"hostname": None,
"entity.name": "Python Agent Test (logger_structlog)",
"entity.guid": None,
}

_common_attributes_trace_linking = {
"span.id": "abcdefgh",
"trace.id": "abcdefgh12345678",
**_common_attributes_service_linking,
}


@reset_core_stats_engine()
@override_application_settings({"application_logging.local_decorating.enabled": False})
def test_processor_formatter_logging_inside_transaction(exercise_both_loggers):
@validate_log_events(
[
{"message": "Dog", "level": "ERROR", **_common_attributes_trace_linking},
{"message": "Elephant", "level": "CRITICAL", **_common_attributes_trace_linking},
{"message": "Bird", "level": "INFO", **_common_attributes_trace_linking},
{"message": "Fish", "level": "ERROR", **_common_attributes_trace_linking},
{"message": "Giraffe", "level": "CRITICAL", **_common_attributes_trace_linking},
]
)
@validate_log_event_count(5)
@background_task()
def test():
exercise_both_loggers()

test()


@reset_core_stats_engine()
@override_application_settings({"application_logging.local_decorating.enabled": False})
def test_processor_formatter_logging_outside_transaction(exercise_both_loggers):
@validate_log_events_outside_transaction(
[
{"message": "Dog", "level": "ERROR", **_common_attributes_service_linking},
{"message": "Elephant", "level": "CRITICAL", **_common_attributes_service_linking},
{"message": "Bird", "level": "INFO", **_common_attributes_service_linking},
{"message": "Fish", "level": "ERROR", **_common_attributes_service_linking},
{"message": "Giraffe", "level": "CRITICAL", **_common_attributes_service_linking},
]
)
@validate_log_event_count_outside_transaction(5)
def test():
exercise_both_loggers()

test()


# Test metrics
# ---------------------

_test_logging_unscoped_metrics = [
("Logging/lines", 5),
("Logging/lines/INFO", 1),
("Logging/lines/ERROR", 2),
("Logging/lines/CRITICAL", 2),
]
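These metric counts follow directly from the five log lines the fixture emits: one INFO, two ERROR, two CRITICAL. A quick sketch of how such per-level counts can be derived (the `levels` list is assumed from the expected log events earlier in this file):

```python
from collections import Counter

# Severity levels of the five log lines exercised by the fixture
# (assumed from the expected events: Dog, Elephant, Bird, Fish, Giraffe).
levels = ["ERROR", "CRITICAL", "INFO", "ERROR", "CRITICAL"]

counts = Counter(levels)
metrics = [("Logging/lines", len(levels))] + [
    (f"Logging/lines/{level}", count) for level, count in counts.items()
]
```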


@reset_core_stats_engine()
def test_processor_formatter_metrics_inside_transaction(exercise_both_loggers):
@validate_transaction_metrics(
"test_processor_formatter:test_processor_formatter_metrics_inside_transaction.<locals>.test",
custom_metrics=_test_logging_unscoped_metrics,
background_task=True,
)
@background_task()
def test():
exercise_both_loggers()

test()


@reset_core_stats_engine()
def test_processor_formatter_metrics_outside_transaction(exercise_both_loggers):
@validate_custom_metrics_outside_transaction(_test_logging_unscoped_metrics)
def test():
exercise_both_loggers()

test()
1 change: 1 addition & 0 deletions tests/mlmodel_langchain/conftest.py
@@ -73,6 +73,7 @@ def openai_clients(openai_version, MockExternalOpenAIServer): # noqa: F811
chat = ChatOpenAI(
base_url=f"http://localhost:{server.port}",
api_key="NOT-A-REAL-SECRET",
temperature=0.7,
)
embeddings = OpenAIEmbeddings(
openai_api_key="NOT-A-REAL-SECRET", openai_api_base=f"http://localhost:{server.port}"
100 changes: 100 additions & 0 deletions tests/mlmodel_langchain/new_vectorstore_adder.py
@@ -0,0 +1,100 @@
"""
This script automatically adds instrumentation for new vectorstore classes to the
newrelic-python-agent. To run it, start from the root of the newrelic-python-agent
repository and run:
`python tests/mlmodel_langchain/new_vectorstore_adder.py`
This generates the code needed to instrument the new vectorstore classes in the
local copy of the newrelic-python-agent repository.
"""

import os

from langchain_community import vectorstores

from newrelic.hooks.mlmodel_langchain import VECTORSTORE_CLASSES

dir_path = os.path.dirname(os.path.realpath(__file__))
test_dir = os.path.abspath(os.path.join(dir_path, os.pardir))
REPO_PATH = os.path.abspath(os.path.join(test_dir, os.pardir))


def add_to_config(directory, instrumented_class=None):
    # Only add a config entry if the directory has no instrumented class already.
if instrumented_class:
return

with open(f"{REPO_PATH}/newrelic/config.py", "r+") as file:
text = file.read()
text = text.replace(
"VectorStores with similarity_search method",
"VectorStores with similarity_search method\n "
+ "_process_module_definition(\n "
+ f'"{directory}",\n '
+ '"newrelic.hooks.mlmodel_langchain",\n '
+ '"instrument_langchain_vectorstore_similarity_search",\n '
+ ")\n",
1,
)
file.seek(0)
file.write(text)
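`add_to_config` works by textual substitution: it locates the `VectorStores with similarity_search method` comment in `newrelic/config.py` and re-emits it with a generated `_process_module_definition` call appended, using `str.replace` with a count of 1 so only the first occurrence of the marker is touched. The technique in isolation (the marker line and module name here are illustrative, not the repo's exact text):

```python
MARKER = "# VectorStores with similarity_search method"

def insert_after_marker(text, module):
    # Replace only the first occurrence of the marker, re-emitting the
    # marker itself followed by the generated registration call.
    block = (
        f"{MARKER}\n"
        "_process_module_definition(\n"
        f'    "{module}",\n'
        '    "newrelic.hooks.mlmodel_langchain",\n'
        '    "instrument_langchain_vectorstore_similarity_search",\n'
        ")"
    )
    return text.replace(MARKER, block, 1)
```

Note that writing the result back with `file.seek(0)` and no `truncate()`, as the script does, is only safe because the replacement strictly grows the text.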


def add_to_hooks(class_name, directory, instrumented_class=None):
with open(f"{REPO_PATH}/newrelic/hooks/mlmodel_langchain.py", "r+") as file:
text = file.read()

        # The directory does not exist yet. Add the new directory and class name to the beginning of the dictionary.
if not instrumented_class:
text = text.replace(
"VECTORSTORE_CLASSES = {", "VECTORSTORE_CLASSES = {\n " + f'"{directory}": "{class_name}",', 1
)

# The directory exists, and there are multiple instrumented classes in it. Append to the list.
elif isinstance(instrumented_class, list):
original_list = str(instrumented_class).replace("'", '"')
instrumented_class.append(class_name)
instrumented_class = str(instrumented_class).replace("'", '"')
text = text.replace(
f'"{directory}": {original_list}', f'"{directory}": {instrumented_class}' # TODO: NOT WORKING
)

# The directory exists, but it only has one class. We need to convert this to a list.
else:
text = text.replace(f'"{instrumented_class}"', f'["{instrumented_class}", "{class_name}"]', 1)

file.seek(0)
file.write(text)
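The `TODO: NOT WORKING` branch above depends on `str(instrumented_class)` reproducing exactly how the list literal is formatted on disk; as soon as a formatter rewraps a long list across multiple lines, the `str.replace` no longer finds a match. A whitespace-tolerant regex is one way around that (a sketch of a possible fix, not the repo's actual code):

```python
import re

def append_class(text, directory, class_name):
    """Append class_name to the value mapped to directory in a dict
    literal, whether that value is a quoted string or a list literal,
    regardless of how the list is wrapped or indented."""
    pattern = re.compile(rf'("{re.escape(directory)}"\s*:\s*)("[^"]*"|\[[^\]]*\])')

    def repl(match):
        prefix, value = match.group(1), match.group(2)
        if value.startswith("["):
            # Insert before the closing bracket of the existing list.
            return f'{prefix}{value[:-1].rstrip()}, "{class_name}"]'
        # Promote a single string value to a two-element list.
        return f'{prefix}[{value}, "{class_name}"]'

    return pattern.sub(repl, text, count=1)
```

The `[^\]]*` class matches newlines, so a list split over several lines by a formatter is still captured as one value.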


def main():
_test_vectorstore_modules_instrumented_ignored_classes = set(
[
"VectorStore", # Base class
"Zilliz", # Inherited from Milvus, which we are already instrumenting.
]
)

vector_store_class_directory = vectorstores._module_lookup
for class_name, directory in vector_store_class_directory.items():
class_ = getattr(vectorstores, class_name)
instrumented_class = VECTORSTORE_CLASSES.get(directory, None)

if (
not hasattr(class_, "similarity_search")
or class_name in _test_vectorstore_modules_instrumented_ignored_classes
):
continue

if not instrumented_class or class_name not in instrumented_class:
if class_name in vector_store_class_directory:
uninstrumented_directory = vector_store_class_directory[class_name]

# Add in newrelic/config.py if there is not an instrumented directory
# Otherwise, config already exists, so no need to duplicate it.
add_to_config(uninstrumented_directory, instrumented_class)

# Add in newrelic/hooks/mlmodel_langchain.py
add_to_hooks(class_name, uninstrumented_directory, instrumented_class)


if __name__ == "__main__":
main()
6 changes: 3 additions & 3 deletions tests/mlmodel_langchain/test_chain.py
@@ -525,7 +525,7 @@
"vendor": "langchain",
"ingest_source": "Python",
"virtual_llm": True,
"content": "{'input': 'math', 'context': [Document(metadata={}, page_content='What is 2 + 4?')]}",
"content": "{'input': 'math', 'context': [Document(id='1234', metadata={}, page_content='What is 2 + 4?')]}",
},
],
[
@@ -557,7 +557,7 @@
"ingest_source": "Python",
"is_response": True,
"virtual_llm": True,
"content": "{'input': 'math', 'context': [Document(metadata={}, page_content='What is 2 + 4?')], 'answer': '```html\\n<!DOCTYPE html>\\n<html>\\n<head>\\n <title>Math Quiz</title>\\n</head>\\n<body>\\n <h2>Math Quiz Questions</h2>\\n <ol>\\n <li>What is the result of 5 + 3?</li>\\n <ul>\\n <li>A) 7</li>\\n <li>B) 8</li>\\n <li>C) 9</li>\\n <li>D) 10</li>\\n </ul>\\n <li>What is the product of 6 x 7?</li>\\n <ul>\\n <li>A) 36</li>\\n <li>B) 42</li>\\n <li>C) 48</li>\\n <li>D) 56</li>\\n </ul>\\n <li>What is the square root of 64?</li>\\n <ul>\\n <li>A) 6</li>\\n <li>B) 7</li>\\n <li>C) 8</li>\\n <li>D) 9</li>\\n </ul>\\n <li>What is the result of 12 / 4?</li>\\n <ul>\\n <li>A) 2</li>\\n <li>B) 3</li>\\n <li>C) 4</li>\\n <li>D) 5</li>\\n </ul>\\n <li>What is the sum of 15 + 9?</li>\\n <ul>\\n <li>A) 22</li>\\n <li>B) 23</li>\\n <li>C) 24</li>\\n <li>D) 25</li>\\n </ul>\\n </ol>\\n</body>\\n</html>\\n```'}",
"content": "{'input': 'math', 'context': [Document(id='1234', metadata={}, page_content='What is 2 + 4?')], 'answer': '```html\\n<!DOCTYPE html>\\n<html>\\n<head>\\n <title>Math Quiz</title>\\n</head>\\n<body>\\n <h2>Math Quiz Questions</h2>\\n <ol>\\n <li>What is the result of 5 + 3?</li>\\n <ul>\\n <li>A) 7</li>\\n <li>B) 8</li>\\n <li>C) 9</li>\\n <li>D) 10</li>\\n </ul>\\n <li>What is the product of 6 x 7?</li>\\n <ul>\\n <li>A) 36</li>\\n <li>B) 42</li>\\n <li>C) 48</li>\\n <li>D) 56</li>\\n </ul>\\n <li>What is the square root of 64?</li>\\n <ul>\\n <li>A) 6</li>\\n <li>B) 7</li>\\n <li>C) 8</li>\\n <li>D) 9</li>\\n </ul>\\n <li>What is the result of 12 / 4?</li>\\n <ul>\\n <li>A) 2</li>\\n <li>B) 3</li>\\n <li>C) 4</li>\\n <li>D) 5</li>\\n </ul>\\n <li>What is the sum of 15 + 9?</li>\\n <ul>\\n <li>A) 22</li>\\n <li>B) 23</li>\\n <li>C) 24</li>\\n <li>D) 25</li>\\n </ul>\\n </ol>\\n</body>\\n</html>\\n```'}",
},
],
]
@@ -1814,7 +1814,7 @@ def _test():
@background_task()
def test_retrieval_chains(set_trace_info, retrieval_chain_prompt, embedding_openai_client, chat_openai_client):
set_trace_info()
documents = [langchain_core.documents.Document(page_content="What is 2 + 4?")]
documents = [langchain_core.documents.Document(id="1234", page_content="What is 2 + 4?")]
vectordb = FAISS.from_documents(documents=documents, embedding=embedding_openai_client)
retriever = vectordb.as_retriever()
question_answer_chain = create_stuff_documents_chain(
3 changes: 2 additions & 1 deletion tests/mlmodel_langchain/test_vectorstore.py
@@ -83,6 +83,7 @@ def vectorstore_events_sans_content(event):
"ingest_source": "Python",
"metadata.source": os.path.join(os.path.dirname(__file__), "hello.pdf"),
"metadata.page": 0,
"metadata.page_label": "1",
},
),
]
@@ -91,7 +92,7 @@ def vectorstore_events_sans_content(event):
_test_vectorstore_modules_instrumented_ignored_classes = set(
[
"VectorStore", # Base class
"ElasticKnnSearch", # Deprecated, so we will not be instrumenting this.
"Zilliz", # Inherited from Milvus, which we are already instrumenting.
]
)

30 changes: 27 additions & 3 deletions tests/testing_support/db_settings.py
@@ -219,6 +219,32 @@ def mongodb_settings():
return settings


def cassandra_settings():
    """Return a list of dicts of settings for connecting to Cassandra.
    Returns the correct settings depending on which of the environments it
    is running in. It attempts to set variables in the following order,
    where later environments override earlier ones.
    1. Local
    2. GitHub Actions
    """

host = "host.docker.internal" if "GITHUB_ACTIONS" in os.environ else "localhost"
instances = 1
identifier = str(os.getpid())
settings = [
{
"host": host,
"port": 8080 + instance_num,
"keyspace": f"cassandra_keyspace_{identifier}",
"table_name": f"cassandra_table_{identifier}",
}
for instance_num in range(instances)
]
return settings
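With `instances = 1` the helper yields a single settings dict; ports start at 8080 and increment per instance, and the PID-based identifier keeps keyspace and table names unique per test run. A standalone sketch of the same shape (host and identifier fixed for illustration, rather than read from the environment):

```python
def cassandra_settings_sketch(instances=1, identifier="1234", host="localhost"):
    # Mirrors the helper above: one dict per instance, ports 8080, 8081, ...
    return [
        {
            "host": host,
            "port": 8080 + instance_num,
            "keyspace": f"cassandra_keyspace_{identifier}",
            "table_name": f"cassandra_table_{identifier}",
        }
        for instance_num in range(instances)
    ]
```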


def firestore_settings():
"""Return a list of dict of settings for connecting to firestore.
@@ -234,9 +260,7 @@ def firestore_settings():

host = "host.docker.internal" if "GITHUB_ACTIONS" in os.environ else "127.0.0.1"
instances = 2
settings = [
{"host": host, "port": 8080 + instance_num} for instance_num in range(instances)
]
settings = [{"host": host, "port": 8080 + instance_num} for instance_num in range(instances)]
return settings


55 changes: 55 additions & 0 deletions tests/testing_support/http_client_recorder.py
@@ -0,0 +1,55 @@
# Copyright 2010 New Relic, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from collections import namedtuple

from newrelic.common.agent_http import DeveloperModeClient
from newrelic.common.encoding_utils import json_encode

Request = namedtuple("Request", ("method", "path", "params", "headers", "payload"))


class HttpClientRecorder(DeveloperModeClient):
SENT = []
STATUS_CODE = None
STATE = 0

def send_request(
self,
method="POST",
path="/agent_listener/invoke_raw_method",
params=None,
headers=None,
payload=None,
):
request = Request(method=method, path=path, params=params, headers=headers, payload=payload)
self.SENT.append(request)
if self.STATUS_CODE:
# Define behavior for a 200 status code for use in test_agent_control_health.py
if self.STATUS_CODE == 200:
payload = {"return_value": "Hello World!"}
response_data = json_encode(payload).encode("utf-8")
return self.STATUS_CODE, response_data
return self.STATUS_CODE, b""

return super(HttpClientRecorder, self).send_request(method, path, params, headers, payload)

def __enter__(self):
HttpClientRecorder.STATE += 1

def __exit__(self, exc, value, tb):
HttpClientRecorder.STATE -= 1

def close_connection(self):
HttpClientRecorder.STATE -= 1
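The recorder stores every request in a class-level `SENT` list so tests can assert on exactly what the agent tried to send, and `STATUS_CODE` lets a test force a canned response. The same pattern, stripped of its `DeveloperModeClient` base for a self-contained illustration (the fallback response here is a stand-in, not the real client's behavior):

```python
from collections import namedtuple

Request = namedtuple("Request", ("method", "path", "params", "headers", "payload"))

class RecorderSketch:
    SENT = []           # class-level, shared across instances for easy assertions
    STATUS_CODE = None  # set by a test to force a canned response

    def send_request(self, method="POST", path="/agent_listener/invoke_raw_method",
                     params=None, headers=None, payload=None):
        # Record every outgoing request before deciding what to answer.
        RecorderSketch.SENT.append(Request(method, path, params, headers, payload))
        if self.STATUS_CODE:
            return self.STATUS_CODE, b""
        return 200, b"{}"  # stand-in for the real client's response
```

A test might set `RecorderSketch.STATUS_CODE = 500`, drive the code under test, then inspect `RecorderSketch.SENT` to verify which endpoints were hit and with what payloads.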
8 changes: 7 additions & 1 deletion tox.ini
@@ -44,6 +44,7 @@ setupdir = {toxinidir}
; Fail tests when interpreters are missing.
skip_missing_interpreters = false
envlist =
cassandra-datastore_cassandradriver-{py38,py39,py310,py311,py312,pypy310}-cassandralatest,
elasticsearchserver07-datastore_elasticsearch-{py37,py38,py39,py310,py311,py312,py313,pypy310}-elasticsearch07,
elasticsearchserver08-datastore_elasticsearch-{py37,py38,py39,py310,py311,py312,py313,pypy310}-elasticsearch08,
firestore-datastore_firestore-{py37,py38,py39,py310,py311,py312,py313},
@@ -138,7 +139,8 @@ envlist =
python-framework_graphql-py37-graphql{0301,0302},
python-framework_pyramid-{py37,py38,py39,py310,py311,py312,py313,pypy310}-Pyramidlatest,
python-framework_pyramid-{py37,py38,py39,py310,py311,py312,py313,pypy310}-Pyramid0110-cornice,
python-framework_sanic-{py37,py38,py39,py310,py311,py312,py313,pypy310}-saniclatest,
python-framework_sanic-{py37,py38}-sanic2406,
python-framework_sanic-{py39,py310,py311,py312,py313,pypy310}-saniclatest,
python-framework_sanic-{py38,pypy310}-sanic{200904,210300,2109,2112,2203,2290},
python-framework_starlette-{py310,pypy310}-starlette{0014,0015,0019,0028},
python-framework_starlette-{py37,py38,py39,py310,py311,py312,py313,pypy310}-starlettelatest,
@@ -255,6 +257,8 @@ deps =
datastore_aiomysql: aiomysql
datastore_aiomysql: cryptography
datastore_bmemcached: python-binary-memcached
datastore_cassandradriver-cassandralatest: cassandra-driver
datastore_cassandradriver-cassandralatest: twisted
datastore_elasticsearch: requests
datastore_elasticsearch-elasticsearch07: elasticsearch<8.0
datastore_elasticsearch-elasticsearch08: elasticsearch<9.0
@@ -359,6 +363,7 @@ deps =
framework_sanic-sanic2112: sanic<21.13
framework_sanic-sanic2203: sanic<22.4
framework_sanic-sanic2290: sanic<22.9.1
framework_sanic-sanic2406: sanic<24.07
framework_sanic-saniclatest: sanic
framework_sanic-sanic{200904,210300,2109,2112,2203,2290}: websockets<11
; For test_exception_in_middleware test, anyio is used:
@@ -473,6 +478,7 @@ changedir =
datastore_aiomysql: tests/datastore_aiomysql
datastore_asyncpg: tests/datastore_asyncpg
datastore_bmemcached: tests/datastore_bmemcached
datastore_cassandradriver: tests/datastore_cassandradriver
datastore_elasticsearch: tests/datastore_elasticsearch
datastore_firestore: tests/datastore_firestore
datastore_memcache: tests/datastore_memcache