
Comparing changes

Choose two branches to see what’s changed or to start a new pull request.

base repository: aws-samples/aws-cudos-framework-deployment
base: 4.0.10
head repository: aws-samples/aws-cudos-framework-deployment
compare: 4.0.11

Commits on Jan 28, 2025

  1. Health events dash 2.1.0 (#1116)

    * Health Events Dashboard 2.1.0
    
    * Fix default last update days
    esc1144 authored Jan 28, 2025
    64f4e0e

Commits on Jan 29, 2025

  1. fixes for health (#1117)

    iakov-aws authored Jan 29, 2025
    2ba0ee9
  2. cora v0.0.7 (#1119)

    yprikhodko authored Jan 29, 2025
    801efff
  3. Health events dash 2.1.0 - fix to QS export anomaly (#1120)

    * Health Events Dashboard 2.1.0
    
    * Fix default last update days
    
    * Correct QS anomaly
    
    * Fix for backversion of QuickSight in some regions
    esc1144 authored Jan 29, 2025
    9910baf
  4. Fixing existing behaviour for engine, engine version and cluster version controls where selections are being retained across sheets, resulting in mixed values being displayed in controls.

    New individual parameters and controls are now defined for RDS engine and engine version, EKS cluster version, and OpenSearch engine version. Filters on each sheet are associated to their new corresponding parameters. Visual titles mentioning 'cost per year' are also adjusted to mention 'monthly cost' instead in alignment with recent changes for the dashboard.
    juchavw committed Jan 29, 2025
    faac64b

Commits on Jan 30, 2025

  1. DataTransfer Dashboard changes for CUR2 compatibility (#1118)

    chaitanyashah authored Jan 30, 2025
    969f9ba
  2. Merge pull request #1121 from aws-samples/fix/ext-supp-parameters-visual-titles-adjustment
    
    Fixing existing behaviour for parameters controls mixing values across sheets
    juchavw authored Jan 30, 2025
    2abd73d

Commits on Feb 6, 2025

  1. add lf support (#1122)

    iakov-aws authored Feb 6, 2025
    fa1aae0
  2. add fuzzy input and refactor discovery

    * add fuzzy input
    
    
    * fix tests
    
    * fix tests
    
    * refactor discovery
    
    * replace discover
    
    * refactoring: speed up working with dashboard
    
    * get rid of click
    
    * enforce all yes
    
    * fix deletion
    
    * fix-yes-no
    
    * allow cur1 in dependency
    iakov-aws authored Feb 6, 2025
    5407ef9

Commits on Feb 13, 2025

  1. Rss feed (#1127)

    * Switch to FOCUS 1.0 GA table
    
    * Update focus.yaml
    
    * bug fixes
    
    * update version format
    
    * fixes to pass RSS validations
    
    * Add CUDOS v5.5
    
    * new changes
    
    * date bump
    
    * fix formatting
    
    * fix formatting
    
    * self ref update
    
    * formatting fix
    
    * bump date
    
    * add publish
    
    * test update
    
    * Update publish-rss.yml
    
    Update creds
    
    * Update publish-rss.yml
    
    ---------
    
    Co-authored-by: Yuriy Prykhodko <yprikhodko@gmail.com>
    Co-authored-by: yuriypr <yuriypr@amazon.lu>
    3 people authored Feb 13, 2025
    dabf637
  2. Update publish-rss.yml (#1128)

    * Update publish-rss.yml
    
    * Update publish-rss.yml
    yprikhodko authored Feb 13, 2025
    76f936f

Commits on Feb 14, 2025

  1. Update cloud-intelligence-dashboards.rss (#1131)

    yprikhodko authored Feb 14, 2025
    cf63689

Commits on Feb 26, 2025

  1. ECS feature added to Compute Optimizer (#1133)

    * ECS feature added to Compute Optimizer
    
    * adding ecs
    VoicuAWS authored Feb 26, 2025
    077e97b

Commits on Feb 27, 2025

  1. minor logging fixes (#1136)

    VoicuAWS authored Feb 27, 2025
    d366374

Commits on Mar 3, 2025

  1. updating connect dashboard version to v1.1.0

    Alex Yankovskyy committed Mar 3, 2025
    c3ded4c

Commits on Mar 4, 2025

  1. fix qs export issues

    iakov-aws committed Mar 4, 2025
    c52b34a
  2. updating connect dashboard version to v1.1.0 04/03

    Alex Yankovskyy committed Mar 4, 2025
    bfdfcac
  3. updating connect dashboard version to v1.1.0 04/03

    Alex Yankovskyy committed Mar 4, 2025
    e92026f
  4. updating connect dashboard version to v1.1.0 04/03

    Alex Yankovskyy committed Mar 4, 2025
    7c4796a
  5. updating connect dashboard version to v1.1.0 04/03

    Alex Yankovskyy committed Mar 4, 2025
    b621139
  6. updating changelog for connect_v1.1.0

    Alex Yankovskyy committed Mar 4, 2025
    4c39810
  7. Merge pull request #1138 from aws-samples/connect_v1.1.0

    updating amazon connect dashboard version to v1.1.0
    AleksYan authored Mar 4, 2025
    90bf8d0

Commits on Mar 12, 2025

  1. updating filter settings for DailyUsage>InboundMins (#1140)

    AleksYan authored Mar 12, 2025
    39e7d88
  2. Adding Pagination for cid-helper-quicksight-list_groups (#1143)

    SohamMajumder authored Mar 12, 2025
    fc50461

Commits on Mar 18, 2025

  1. fix release for cfn (#1123)

    iakov-aws authored Mar 18, 2025
    c294ff8
  2. fix export cur to support mutidb (#1083)

    iakov-aws authored Mar 18, 2025
    8e66cd9

Commits on Mar 19, 2025

  1. Add parameter needed for CORA dashboard (#1147)

    petrokashlikov authored Mar 19, 2025
    9d2b974

Commits on Mar 21, 2025

  1. release 4.0.11 (#1148)

    iakov-aws authored Mar 21, 2025
    3863ca7
Showing with 9,424 additions and 4,420 deletions.
  1. +30 −0 .github/workflows/publish-rss.yml
  2. +2 −2 assets/publish_lambda_layer.sh
  3. +1 −2 cfn-templates/cid-admin-policies.yaml
  4. +120 −53 cfn-templates/cid-cfn.yml
  5. +22 −0 cfn-templates/cid-lakeformation-prerequisite.yaml
  6. +1 −0 cfn-templates/cid-plugin.yml
  7. +14 −3 changes/CHANGELOG-amazon-connect.md
  8. +3 −0 changes/CHANGELOG-cod.md
  9. +4 −0 changes/CHANGELOG-cora.md
  10. +16 −0 changes/CHANGELOG-extended-support-cost-projection.md
  11. +5 −0 changes/CHANGELOG-hed.md
  12. +62 −7 changes/cloud-intelligence-dashboards.rss
  13. +1 −1 cid/_version.py
  14. +3 −0 cid/builtin/core/data/queries/co/all_options.sql
  15. +485 −0 cid/builtin/core/data/queries/co/ecs_service.json
  16. +271 −0 cid/builtin/core/data/queries/co/ecs_service_options.sql
  17. +16 −2 cid/builtin/core/data/resources.yaml
  18. +30 −42 cid/common.py
  19. +31 −26 cid/export.py
  20. +10 −9 cid/helpers/athena.py
  21. +14 −8 cid/helpers/cur.py
  22. +1 −1 cid/helpers/glue.py
  23. +1 −1 cid/helpers/iam.py
  24. +103 −217 cid/helpers/quicksight/__init__.py
  25. +212 −28 cid/helpers/quicksight/dashboard.py
  26. +0 −2 cid/helpers/quicksight/resource.py
  27. +4 −1 cid/test/bats/10-deploy-update-delete/cudos.bats
  28. +36 −20 cid/utils.py
  29. +7,006 −3,377 dashboards/amazon-connect/amazon-connect.yaml
  30. +224 −215 dashboards/cora/cora.yaml
  31. +35 −12 dashboards/data-transfer/DataTransfer-Cost-Analysis-Dashboard.yaml
  32. +264 −212 dashboards/extended-support-cost-projection/extended-support-cost-projection.yaml
  33. +395 −177 dashboards/health-events/health-events.yaml
  34. +1 −1 requirements.txt
  35. +1 −1 setup.cfg
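The commit list and per-file change summary above can be reproduced locally with plain git (a sketch; it assumes you have network access to clone the repository and that the release tags `4.0.10` and `4.0.11` are present in the clone):

```shell
# Fetch the repository, including release tags
git clone https://github.com/aws-samples/aws-cudos-framework-deployment.git
cd aws-cudos-framework-deployment

# List the commits between the two releases, oldest first
git log --oneline --reverse 4.0.10..4.0.11

# Summarize additions/deletions per file, as in the list above
git diff --stat 4.0.10 4.0.11
```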
30 changes: 30 additions & 0 deletions .github/workflows/publish-rss.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,30 @@
name: Sync RSS to S3

on:
  push:
    branches:
      - main
    paths:
      - 'changes/cloud-intelligence-dashboards.rss'

jobs:
  sync-to-s3:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v3
        with:
          role-to-assume: ${{ secrets.AWS_RSS_ROLE }}
          role-session-name: ${{ secrets.AWS_RSS_SESSION_NAME }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Copy RSS file to S3
        run: |
          ls
          aws s3 cp changes/cloud-intelligence-dashboards.rss s3://cid-feed/feed/cloud-intelligence-dashboards.rss
4 changes: 2 additions & 2 deletions assets/publish_lambda_layer.sh
@@ -37,9 +37,9 @@ rm -vf ./$layer
# Publish cfn (only works for the release)
if aws s3 ls "s3://aws-managed-cost-intelligence-dashboards" >/dev/null 2>&1; then
echo "Updating cid-cfn.yml"
aws s3 sync ./cfn-templates/ s3://aws-managed-cost-intelligence-dashboards/cfn/ --exclude 'cfn-templates/cur-aggregation.yaml' --exclude 'cfn-templates/data-exports-aggregation.yaml'
aws s3 sync ./cfn-templates/ s3://aws-managed-cost-intelligence-dashboards/cfn/ --exclude './cfn-templates/cur-aggregation.yaml' --exclude './cfn-templates/data-exports-aggregation.yaml'
# Publish additional copy into respective version folder
aws s3 sync ./cfn-templates/ "s3://aws-managed-cost-intelligence-dashboards/cfn/${CID_VERSION}/" --exclude 'cfn-templates/cur-aggregation.yaml' --exclude 'cfn-templates/data-exports-aggregation.yaml'
aws s3 sync ./cfn-templates/ "s3://aws-managed-cost-intelligence-dashboards/cfn/${CID_VERSION}/" --exclude './cfn-templates/cur-aggregation.yaml' --exclude './cfn-templates/data-exports-aggregation.yaml'

echo "Syncing dashboards"
aws s3 sync ./dashboards s3://aws-managed-cost-intelligence-dashboards/hub/
3 changes: 1 addition & 2 deletions cfn-templates/cid-admin-policies.yaml
@@ -455,9 +455,8 @@ Resources:
- quicksight:DeleteRefreshSchedule
- quicksight:DescribeRefreshSchedule
- quicksight:ListRefreshSchedules
- quicksight:CreateDataSetRefreshProperties
- quicksight:PutDataSetRefreshProperties
- quicksight:DescribeDataSetRefreshProperties
- quicksight:UpdateDataSetRefreshProperties
- quicksight:DeleteDataSetRefreshProperties
Effect: Allow
Resource:
173 changes: 120 additions & 53 deletions cfn-templates/cid-cfn.yml
@@ -1,5 +1,5 @@
AWSTemplateFormatVersion: '2010-09-09'
Description: Deployment of Cloud Intelligence Dashboards v4.0.10
Description: Deployment of Cloud Intelligence Dashboards v4.0.11
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
@@ -281,7 +281,7 @@ Conditions:
- !Equals [ !Ref DeployCUDOSv5, "yes" ]
- !Equals [ !Ref DeployCostIntelligenceDashboard, "yes" ]
- !Equals [ !Ref DeployKPIDashboard, "yes" ]
NeedLakeFormationCrawlerPermissions:
NeedLakeFormationAndCURTable:
Fn::And:
- !Equals [ !Ref LakeFormationEnabled, "yes" ]
- !Condition NeedCURTable
@@ -1523,111 +1523,177 @@ Resources:
reason: "Need explicit name to give permissions"


#################################### START LF BLOCK

DataLakeSettingsCidExecRolePerm:
Type: AWS::LakeFormation::Permissions
LakeFormationTagsForDatabase:
Type: AWS::LakeFormation::TagAssociation
Condition: NeedLakeFormationEnabled
Properties:
DataLakePrincipal:
Resource:
Database:
CatalogId: !Ref "AWS::AccountId"
Name: !If [NeedDatabase, !Ref CidDatabase, !Ref DatabaseName ]
LFTags:
- TagKey: !ImportValue cid-LakeFormation-TagKey
TagValues:
- !ImportValue cid-LakeFormation-TagValue
CatalogId: !Ref "AWS::AccountId"
LakeFormationTagsForCurTable:
Type: AWS::LakeFormation::TagAssociation
Condition: NeedLakeFormationAndCURTable
Properties:
Resource:
Table:
CatalogId: !Ref "AWS::AccountId"
DatabaseName: !If [NeedDatabase, !Ref CidDatabase, !Ref DatabaseName ]
Name: !Ref MyCURTable
LFTags:
- TagKey: !ImportValue cid-LakeFormation-TagKey
TagValues:
- !ImportValue cid-LakeFormation-TagValue
CatalogId: !Ref "AWS::AccountId"

DataLakeCidExecRolePermDatabase:
Type: AWS::LakeFormation::PrincipalPermissions
Condition: NeedLakeFormationEnabled
Properties:
PermissionsWithGrantOption: []
Principal:
DataLakePrincipalIdentifier: !GetAtt CidExecRole.Arn
Permissions:
- ALL
Resource:
DatabaseResource:
LFTagPolicy:
CatalogId: !Ref "AWS::AccountId"
Name: !If [NeedDatabase, !Ref CidDatabase, !Ref DatabaseName ]
DataLakeSettingsCidExecRolePermTable:
Type: AWS::LakeFormation::Permissions
ResourceType: DATABASE
Expression:
- TagKey: !ImportValue cid-LakeFormation-TagKey
TagValues:
- !ImportValue cid-LakeFormation-TagValue
DataLakeCidExecRolePermTable:
Type: AWS::LakeFormation::PrincipalPermissions
Condition: NeedLakeFormationEnabled
Properties:
DataLakePrincipal:
PermissionsWithGrantOption: []
Principal:
DataLakePrincipalIdentifier: !GetAtt CidExecRole.Arn
Permissions:
- ALL
Resource:
TableResource:
LFTagPolicy:
CatalogId: !Ref "AWS::AccountId"
DatabaseName: !If [NeedDatabase, !Ref CidDatabase, !Ref DatabaseName ]
TableWildcard: {}

DataLakeSettingsQuickSightDataSourceRolePerm:
Type: AWS::LakeFormation::Permissions
ResourceType: TABLE
Expression:
- TagKey: !ImportValue cid-LakeFormation-TagKey
TagValues:
- !ImportValue cid-LakeFormation-TagValue

DataLakeQuickSightDataSourceRolePermDatabase:
Type: AWS::LakeFormation::PrincipalPermissions
Condition: NeedLakeFormationEnabledQuickSightDataSourceRole
Properties:
DataLakePrincipal:
PermissionsWithGrantOption: []
Principal:
DataLakePrincipalIdentifier: !GetAtt QuickSightDataSourceRole.Arn
Permissions:
- ALL
Resource:
DatabaseResource:
LFTagPolicy:
CatalogId: !Ref "AWS::AccountId"
Name: !If [NeedDatabase, !Ref CidDatabase, !Ref DatabaseName ]
DataLakeSettingsQuickSightDataSourceRolePermTable:
Type: AWS::LakeFormation::Permissions
ResourceType: DATABASE
Expression:
- TagKey: !ImportValue cid-LakeFormation-TagKey
TagValues:
- !ImportValue cid-LakeFormation-TagValue
DataLakeQuickSightDataSourceRolePermTable:
Type: AWS::LakeFormation::PrincipalPermissions
Condition: NeedLakeFormationEnabledQuickSightDataSourceRole
Properties:
DataLakePrincipal:
PermissionsWithGrantOption: []
Principal:
DataLakePrincipalIdentifier: !GetAtt QuickSightDataSourceRole.Arn
Permissions:
- ALL
Resource:
TableResource:
LFTagPolicy:
CatalogId: !Ref "AWS::AccountId"
DatabaseName: !If [NeedDatabase, !Ref CidDatabase, !Ref DatabaseName ]
TableWildcard: {}

DataLakeSettingsCidCrawlerRolePerm:
Type: AWS::LakeFormation::Permissions
Condition: NeedLakeFormationCrawlerPermissions
ResourceType: TABLE
Expression:
- TagKey: !ImportValue cid-LakeFormation-TagKey
TagValues:
- !ImportValue cid-LakeFormation-TagValue

DataLakeCidCURCrawlerRolePermDatabase:
Type: AWS::LakeFormation::PrincipalPermissions
Condition: NeedLakeFormationAndCURTable
Properties:
DataLakePrincipal:
PermissionsWithGrantOption: []
Principal:
DataLakePrincipalIdentifier: !GetAtt CidCURCrawlerRole.Arn
Permissions:
- ALL
Resource:
DatabaseResource:
LFTagPolicy:
CatalogId: !Ref "AWS::AccountId"
Name: !If [NeedDatabase, !Ref CidDatabase, !Ref DatabaseName ]
DataLakeSettingsCidCrawlerRolePermTable:
Type: AWS::LakeFormation::Permissions
Condition: NeedLakeFormationCrawlerPermissions
ResourceType: DATABASE
Expression:
- TagKey: !ImportValue cid-LakeFormation-TagKey
TagValues:
- !ImportValue cid-LakeFormation-TagValue
DataLakeCidCURCrawlerRolePermTable:
Type: AWS::LakeFormation::PrincipalPermissions
Condition: NeedLakeFormationAndCURTable
Properties:
DataLakePrincipal:
PermissionsWithGrantOption: []
Principal:
DataLakePrincipalIdentifier: !GetAtt CidCURCrawlerRole.Arn
Permissions:
- ALL
Resource:
TableResource:
LFTagPolicy:
CatalogId: !Ref "AWS::AccountId"
DatabaseName: !If [NeedDatabase, !Ref CidDatabase, !Ref DatabaseName ]
TableWildcard: {}

DataLakeSettingQuickSightAdminUserPerm:
Type: AWS::LakeFormation::Permissions
ResourceType: TABLE
Expression:
- TagKey: !ImportValue cid-LakeFormation-TagKey
TagValues:
- !ImportValue cid-LakeFormation-TagValue

DataLakeDefaultQSUserPermDatabase: # only needed if default QS role is used, but wont hurt to duplicate if CX will create something new
Type: AWS::LakeFormation::PrincipalPermissions
Condition: NeedLakeFormationEnabled
Properties:
DataLakePrincipal:
PermissionsWithGrantOption: []
Principal:
DataLakePrincipalIdentifier: !Sub 'arn:${AWS::Partition}:quicksight:${Setup.IdentityRegion}:${AWS::AccountId}:user/default/${QuickSightUser}'
Permissions:
- ALL
Resource:
DatabaseResource:
LFTagPolicy:
CatalogId: !Ref "AWS::AccountId"
Name: !If [NeedDatabase, !Ref CidDatabase, !Ref DatabaseName ]
DataLakeSettingQuickSightAdminUserPermTable:
Type: AWS::LakeFormation::Permissions
ResourceType: DATABASE
Expression:
- TagKey: !ImportValue cid-LakeFormation-TagKey
TagValues:
- !ImportValue cid-LakeFormation-TagValue
DataLakeDefaultQSUserPermTable:
Type: AWS::LakeFormation::PrincipalPermissions
Condition: NeedLakeFormationEnabled
Properties:
DataLakePrincipal:
PermissionsWithGrantOption: []
Principal:
DataLakePrincipalIdentifier: !Sub 'arn:${AWS::Partition}:quicksight:${Setup.IdentityRegion}:${AWS::AccountId}:user/default/${QuickSightUser}'
Permissions:
- ALL
Resource:
TableResource:
LFTagPolicy:
CatalogId: !Ref "AWS::AccountId"
DatabaseName: !If [NeedDatabase, !Ref CidDatabase, !Ref DatabaseName ]
TableWildcard: {}
ResourceType: TABLE
Expression:
- TagKey: !ImportValue cid-LakeFormation-TagKey
TagValues:
- !ImportValue cid-LakeFormation-TagValue

#################################### END OF LF BLOCK

KmsPolicyForCidExecRole:
Type: AWS::IAM::Policy
@@ -1780,7 +1846,7 @@ Resources:
- LambdaLayerBucketPrefixIsManaged
- !FindInMap [RegionMap, !Ref 'AWS::Region', BucketName]
- !Sub '${LambdaLayerBucketPrefix}-${AWS::Region}' # Region added for backward compatibility
S3Key: 'cid-resource-lambda-layer/cid-4.0.10.zip' #replace version here if needed
S3Key: 'cid-resource-lambda-layer/cid-4.0.11.zip' #replace version here if needed
CompatibleRuntimes:
- python3.10
- python3.11
@@ -1868,7 +1934,8 @@ Resources:
view-compute-optimizer-ebs-volume-lines-s3FolderPath: !Sub '${OptimizationDataCollectionBucketPath}/compute_optimizer/compute_optimizer_ebs_volume'
view-compute-optimizer-auto-scale-lines-s3FolderPath: !Sub '${OptimizationDataCollectionBucketPath}/compute_optimizer/compute_optimizer_auto_scale'
view-compute-optimizer-ec2-instance-lines-s3FolderPath: !Sub '${OptimizationDataCollectionBucketPath}/compute_optimizer/compute_optimizer_ec2_instance'
view-compute-optimizer-rds-instance-lines-s3FolderPath: !Sub '${OptimizationDataCollectionBucketPath}/compute_optimizer/compute_optimizer_rds_instance'
view-compute-optimizer-rds-database-lines-s3FolderPath: !Sub '${OptimizationDataCollectionBucketPath}/compute_optimizer/compute_optimizer_rds_database'
view-compute-optimizer-ecs-service-lines-s3FolderPath: !Sub '${OptimizationDataCollectionBucketPath}/compute_optimizer_ecs_service'
dataset-compute-optimizer-all-options-primary-tag-name: !Sub '${PrimaryTagName}'
dataset-compute-optimizer-all-options-secondary-tag-name: !Sub '${SecondaryTagName}'

22 changes: 22 additions & 0 deletions cfn-templates/cid-lakeformation-prerequisite.yaml
@@ -0,0 +1,22 @@
AWSTemplateFormatVersion: '2010-09-09'
Description: 'CID LakeFormation Prerequisite Stack v0.0.1'

Resources:
  LakeFormationTag:
    Type: AWS::LakeFormation::Tag
    Properties:
      CatalogId: !Ref 'AWS::AccountId'
      TagKey: CidAssetsAccess
      TagValues:
        - Allow
        - Deny

Outputs:
  CidLakeFormationTagKey:
    Description: Technical Value - CidExecArn
    Value: CidAssetsAccess
    Export: { Name: 'cid-LakeFormation-TagKey'}
  CidLakeFormationTagValue:
    Description: Technical Value - CidExecArn
    Value: Allow
    Export: { Name: 'cid-LakeFormation-TagValue'}
1 change: 1 addition & 0 deletions cfn-templates/cid-plugin.yml
@@ -36,6 +36,7 @@ Resources:
Dashboard:
dashboard-id: !Ref DashboardId
account-map-source: 'dummy'
account-map-database-name: {'Fn::ImportValue': "cid-CidDatabase"}
data-collection-database-name: 'optimization_data'
resources: !If [ResourcesUrlIsEmpty, !Ref 'AWS::NoValue', !Ref ResourcesUrl]
data_exports_database_name: !If [RequiresDataExports, {'Fn::ImportValue': "cid-DataExports-Database"}, !Ref 'AWS::NoValue']
17 changes: 14 additions & 3 deletions changes/CHANGELOG-amazon-connect.md
@@ -1,10 +1,21 @@
# What's new in Amazon Connect Dashboard

## Amazon Connect Dashboard - v1.1.1
* minor fixes

## Amazon Connect Dashboard - v1.1.0
* Removed link to deprecated feedback form
* new visual on MoM Connect usage trends
* new tab Contact Center to track other services on Connect accounts
* new filter (slicer) to find calls with cost in defined range
* new call distribution per cost bins visual
* removed link to deprecated feedback form
* added recommendations to enable granular billing
* added description on cost and charge types
* added explanation on average unit price
* minor fixes

## Amazon Connect Dashboard - v1.0.1
* Minor bugfixes
* minor bugfixes

## Amazon Connect Dashboard - v1.0.0
* Initial release
* initial release
3 changes: 3 additions & 0 deletions changes/CHANGELOG-cod.md
@@ -1,5 +1,8 @@
# What's new in the Compute Optimizer Dashboard (COD)

## Compute Optimizer Dashboard - v4.0.0
* Added ECS Compute Optimizer sheets

## Compute Optimizer Dashboard - v3.1.0
* Removed link to deprecated feedback form

4 changes: 4 additions & 0 deletions changes/CHANGELOG-cora.md
@@ -1,5 +1,9 @@
# What's new in the CORA

## CORA - v0.0.7
* Added Support of Idle recommendations
* Added Resource Id filter on Usage Optimization tab

## CORA - v0.0.6
* Minor fixes
* Added Resource Id filter
16 changes: 16 additions & 0 deletions changes/CHANGELOG-extended-support-cost-projection.md
@@ -1,5 +1,21 @@
# What's new in Extended Support Cost Projection

## Extended Support Cost Projection - v4.0.2

**Important:** This version requires the data collection version 3.2.0+. Update to this version requires a forced and recursive update.

If you have modified the Extended Support Cost Projection dashboard visuals, these changes will be overridden when the dashboard is updated. Consider backing-up the existing dashboard by creating an analysis from it if you want to keep a reference to customised visuals so you can re-apply them after the update takes place.

To update run these commands in your CloudShell (recommended) or other terminal:

```
python3 -m ensurepip --upgrade
pip3 install --upgrade cid-cmd
cid-cmd update --dashboard-id extended-support-cost-projection
```

- Fixing existing behaviour for engine, engine version and cluster version controls where selections are being retained across sheets, resulting in mixed values being displayed in controls. New individual parameters and controls are now defined for RDS engine and engine version, EKS cluster version, and OpenSearch engine version. Filters on each sheet are associated to their new corresponding parameters.

## Extended Support Cost Projection - v4.0.1

**Important:** This version requires the data collection version 3.2.0+. Update to this version requires a forced and recursive update.
5 changes: 5 additions & 0 deletions changes/CHANGELOG-hed.md
@@ -1,4 +1,9 @@
# What's new in Health Events Dashboard (HED)
## v2.1.0
* Modified Athena query to include events ingested more than 90 days ago if they are not of closed status. Although not a breaking change for the dashboard, you should update with the `--force --recursive` flags to incorporate it.
* Added guidance text for date range filtering
* Minor cosmetic and usability changes

## v2.0.4
* Fix resetting description free text filter issue
* Minor cosmetic and usability changes
69 changes: 62 additions & 7 deletions changes/cloud-intelligence-dashboards.rss
@@ -1,27 +1,78 @@
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
xmlns:atom="http://www.w3.org/1999/xhtml"
xmlns:atom="http://www.w3.org/2005/Atom"
xmlns:content="http://purl.org/rss/1.0/modules/content/">
<channel>
<title>AWS Cloud Intelligence Dashboards</title>
<link>https://catalog.workshops.aws/awscid/en-US</link>
<content:encoded>The Cloud Intelligence Dashboards is an open-source framework from AWS to get high-level and granular insight into their cost and usage data</content:encoded>
<atom:link href="https://github.com/aws-samples/aws-cudos-framework-deployment/tree/main/changes/cloud-intelligence-dashboards.rss"
<description>The Cloud Intelligence Dashboards is an open-source framework from AWS to get high-level and granular insight into their cost and usage data</description>
<atom:link href="https://raw.githubusercontent.com/aws-samples/aws-cudos-framework-deployment/refs/heads/rss_feed/changes/cloud-intelligence-dashboards.rss"
rel="self"
type="application/rss+xml"/>
<lastBuildDate>Wed, 19 Jun 2024 07:15:09 GMT</lastBuildDate>
<lastBuildDate>Fri, 31 Jan 2025 12:00:00 GMT</lastBuildDate>
<language>en-us</language>

<item>
<title>Extended Support Cost Projection Dashboard - v4.0.0</title>
<link>https://catalog.workshops.aws/awscid/en-US/dashboards/advanced/extended-support-cost-projection</link>
<pubDate>23 Jan 2025 12:00:00 GMT</pubDate>
<category><![CDATA[Update]]></category>
<guid isPermaLink="false">https://github.com/aws-samples/aws-cudos-framework-deployment/blob/rss_feed/changes/CHANGELOG-extended-support-cost-projection.md#extended-support-cost-projection---v310</guid>
<description>Extended Support Cost Projection Dashboard update to v4.0.0</description>
<content:encoded><![CDATA[
In this release we have introduced projection of extended support costs for Amazon OpenSearch. The dashboard provides a clear view on OpenSearch and ElasticSearch domains reaching extended support in the next 3, 6, 12 months, and beyond.
Important: This version requires the data collection version 3.2.0+. Update to this version requires a forced and recursive update. If you have modified the Extended Support Cost Projection dashboard view queries, they will be overridden when the dashboard is updated. Consider backing-up the existing view queries if they contain custom changes you want to keep so you can re-apply them after the update takes place.
To update run these commands in your CloudShell (recommended) or other terminal:
pip3 install --upgrade cid-cmd
cid-cmd update --dashboard-id extended-support-cost-projection --force --recursive
]]>
</content:encoded>
</item>


<item>
<title>CUDOS update v5.5</title>
<link>https://catalog.workshops.aws/awscid/en-US/dashboards/foundational/cudos-cid-kpi</link>
<pubDate>29 Nov 2025 12:00:00 GMT</pubDate>
<category><![CDATA[Update]]></category>
<guid isPermaLink="false">https://github.com/aws-samples/aws-cudos-framework-deployment/blob/main/changes/CHANGELOG-cudos.md#cudos---55</guid>
<description>CUDOS update to v5.5</description>
<content:encoded><![CDATA[
* DynamoDB: Refactored visuals to improve user experience and simplify navigation. 'DynamoDB Accounts by Category' visuals replaced with 'DynamoDB Cost per Account' and 'DynamoDB Cost per Usage Type Group'
* DynamoDB: New section 'DynamoDB Provisioned Capacity - Reservations Savings Estimation' which allows to calculate estimated savings for Amazon DynamoDB reserved capacity based on custom commitment amount.
* DynamoDB: New section 'DynamoDB Provisioned Capacity - Reservation Coverage and Usage Monitoring' which allows monitor reserved capacity coverage per region and capacity type dimensions
* DynamoDB: New visual 'Infrequent Access Tables Cost and Cost Efficiency Gain/Loss vs Standard Storage' which shows efficiency gains from Infrequent Access tables and also tables which could be migrated to Standard table class
* Monitoring and Observability: New section 'AWS Config Periodic Recording Savings Opportunities' which shows potential savings from migration to periodic configuration item recording
* Monitoring and Observability: New section 'Account and Regions without AWS Config' which allows to identify account and regions with AWS service usage and where AWS Config is not enabled
* Analytics: Improved 'QuickSight Usage Type Group' calculated field to accommodate the latest QuickSight pricing constructs
* AI/ML: Added Guardrails to the 'Bedrock UsageType Group' calculated field
* Databases: Updated action filter on 'RI Coverage per region | engine | instance type or family' to be applied to 'Top 20 instances' allowing to focus on resources which belong to particular RI dimension
* Compute: Bug fix for 'EKS Extended Support Cost per Account' visual. Added missing filter for Last 30 days.
]]>
</content:encoded>
</item>
<item>
<title>SCAD Containers Cost Allocation Dashboard update to v1.0.0</title>
<link>https://catalog.workshops.aws/awscid/en-US/dashboards/additional/scad-containers-cost-allocation</link>
<pubDate>Thu, 25 Apr 2024 12:00:00 GMT</pubDate>
<category><![CDATA[Update]]></category>
<guid isPermaLink="false">https://github.com/aws-samples/aws-cudos-framework-deployment/blob/main/changes/CHANGELOG-scad-cca.md#scad-containers-cost-allocation-dashboard---v100</guid>
<description>SCAD Containers Cost Allocation Dashboard update to v1.0.0</description>
<content:encoded>Added support to view Net Amortized Cost in "View Cost As" control in all sheets. Removed "Exclude last 1 month" from all date range controls to prevent "No Data" (because Split Cost Allocation Data for EKS starts filling data only in current month)

Fixed issue where all split cost and usage metrics were lower than they should be, for pods on EC2 instances that were running for less than a full hour

Fixed aggregation issues for usage metrics in Athena views</content:encoded>
</item>

@@ -31,6 +82,7 @@
<pubDate>Thu, 25 Apr 2024 12:00:00 GMT</pubDate>
<category><![CDATA[Update]]></category>
<guid isPermaLink="false">https://github.com/aws-samples/aws-cudos-framework-deployment/blob/main/changes/CHANGELOG-cudos.md#cudos---54</guid>
<description>CUDOS update to v5.4</description>
<content:encoded>CUDOS is an in-depth, granular, and recommendation-driven dashboard to help customers dive deep into cost and usage. [new tab] Security: Introducing Security tab with cost and usage details for Security services. New tab includes visuals 'Security Spend per Service', 'Security Spend per Account' and respective detailed view sections for Amazon Cognito and Amazon GuardDuty
Security: New Amazon Cognito section with visuals 'Amazon Cognito Spend and Projected Cost for M2M App Clients and Tokens', 'Amazon Cognito Spend and Projected Cost for M2M App Clients and Tokens per Account' and 'Amazon Cognito Detailed View'
Security: New Amazon GuardDuty section with visuals 'Amazon GuardDuty Spend per UsageType', 'Amazon GuardDuty Spend per Account' and 'Accounts and Regions where Amazon GuardDuty is not enabled'
@@ -44,6 +96,7 @@ DynamoDB: Improved 'TOP 15 Candidates for Infrequent Access Tables Last 30 Days'
<pubDate>Thu, 25 Apr 2024 12:00:00 GMT</pubDate>
<category><![CDATA[Update]]></category>
<guid isPermaLink="false">https://github.com/aws-samples/aws-cudos-framework-deployment/blob/main/changes/CHANGELOG-hed.md#v200</guid>
<description>Health Events Dashboard v2.0.0</description>
<content:encoded>AWS Health integrates with 200+ AWS services to aggregate important information in a timely manner. AWS Health notifies you about service events, planned changes, and other account matters to help you manage your resources and take actions where necessary. Reorganized Summary tab for better flow and easy creation of targeted inventory reports of impacted resources
Requires update with cid-cli parameters: --force --recursive.</content:encoded>
</item>
@@ -54,6 +107,7 @@ Requires update with cid-cli parameters: --force --recursive.</content:encoded>
<pubDate>Thu, 25 Apr 2024 12:00:00 GMT</pubDate>
<category><![CDATA[Update]]></category>
<guid isPermaLink="false">https://github.com/awslabs/cid-framework/releases/tag/3.3.2</guid>
<description>CID Data Collection Framework update to v3.3.2</description>
<content:encoded>DCF centralizes data from across the AWS Organization, covering compute, storage, database, analytics, networking, mobile, developer tools, management tools, security, and enterprise application services.
CHANGES: Health Events metadata schema issues - force JSON to string by @esc1144 in #192
@@ -63,11 +117,12 @@ increase MemorySize of inventory-lambda by @habibmasri in #202</content:encoded>
</item>

<item>
<title>SCAD Containers Cost Allocation Dashboard update to v0.0.1</title>
<title>New SCAD Containers Cost Allocation Dashboard</title>
<link>https://catalog.workshops.aws/awscid/en-US/dashboards/additional/scad-containers-cost-allocation</link>
<pubDate>Thu, 25 Apr 2024 12:00:00 GMT</pubDate>
<category><![CDATA[New dashboard]]></category>
<guid isPermaLink="false">https://github.com/aws-samples/aws-cudos-framework-deployment/blob/main/changes/CHANGELOG-scad-cca.md#scad-containers-cost-allocation-dashboard---v001</guid>
<description>Introducing SCAD Containers Cost Allocation Dashboard</description>
<content:encoded>Initial release&lt;br&gt; &lt;br&gt; Released to GA.&lt;br&gt; &lt;br&gt; -------</content:encoded>
</item>

2 changes: 1 addition & 1 deletion cid/_version.py
Original file line number Diff line number Diff line change
@@ -1,2 +1,2 @@
__version__ = '4.0.10'
__version__ = '4.0.11'

3 changes: 3 additions & 0 deletions cid/builtin/core/data/queries/co/all_options.sql
@@ -18,4 +18,7 @@ UNION SELECT *
UNION SELECT *
FROM
compute_optimizer_rds_storage_options
UNION SELECT *
FROM
compute_optimizer_ecs_service_options
)
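The hunk above registers the new ECS view in `compute_optimizer_all_options` by appending another `UNION SELECT` branch; every unioned per-service view must expose an identical column set for the union to be valid. As a rough illustration (the helper name and assembly approach here are hypothetical, not how cid actually builds the view), such a union body could be generated like this:

```python
# Hypothetical helper: assemble the UNION body of an all-options view
# from a list of per-service option views. Each view must expose the
# same columns for the UNION to be valid.
def build_all_options_query(view_names):
    selects = [f"SELECT *\nFROM\n  {name}" for name in view_names]
    return "(\n" + "\nUNION ".join(selects) + "\n)"

views = [
    "compute_optimizer_rds_storage_options",
    "compute_optimizer_ecs_service_options",  # branch added in this release
]
print(build_all_options_query(views))
```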
485 changes: 485 additions & 0 deletions cid/builtin/core/data/queries/co/ecs_service.json

Large diffs are not rendered by default.

271 changes: 271 additions & 0 deletions cid/builtin/core/data/queries/co/ecs_service_options.sql
@@ -0,0 +1,271 @@
CREATE OR REPLACE VIEW compute_optimizer_ecs_service_options AS
(
SELECT
TRY("date_parse"(lastrefreshtimestamp_utc, '%Y-%m-%dT%H:%i:%s.%fZ')) lastrefreshtimestamp_utc
, accountid accountid
, servicearn arn
, TRY("split_part"(servicearn, ':', 4)) region
, TRY("split_part"(servicearn, ':', 3)) service
, TRY("split_part"(servicearn, '/', 3)) name
, 'ecs_service' module
, effectiverecommendationpreferencessavingsestimationmodesource recommendationsourcetype
, finding finding
, CONCAT(
(CASE WHEN (findingreasoncodes_iscpuoverprovisioned = 'TRUE') THEN 'CPU-Over ' ELSE '' END),
(CASE WHEN (findingreasoncodes_iscpuunderprovisioned = 'TRUE') THEN 'CPU-Under ' ELSE '' END),
(CASE WHEN (findingreasoncodes_ismemoryoverprovisioned = 'TRUE') THEN 'Memory-Over ' ELSE '' END),
(CASE WHEN (findingreasoncodes_ismemoryunderprovisioned = 'TRUE') THEN 'Memory-Under ' ELSE '' END)
) reason
, lookbackperiodindays lookbackperiodindays
, currentperformancerisk as currentperformancerisk
, errorcode errorcode
, errormessage errormessage
, '' ressouce_details
, CONCAT(
utilizationmetrics_cpu_maximum, ';',
CAST(TRY(CAST(utilizationmetrics_cpu_maximum AS double)/CAST(currentserviceconfiguration_cpu AS double)) as varchar), ';',
CAST(TRY(CAST(utilizationmetrics_cpu_maximum AS double)/CAST(recommendationoptions_1_cpu AS double)) as varchar), ';',
utilizationmetrics_memory_maximum, ';',
CAST(TRY((CAST(utilizationmetrics_memory_maximum AS double))/TRY(CAST(currentserviceconfiguration_memory AS double))) as varchar), ';',
CAST(TRY((CAST(utilizationmetrics_memory_maximum AS double))/TRY(CAST(recommendationoptions_1_memory AS double))) as varchar), ';',
'', ';',
'', ';',
'', ';',
''
) utilizationmetrics
, 'Current' option_name
, CONCAT(
currentserviceconfiguration_cpu, ';',
currentserviceconfiguration_memory
) option_from
, '' option_to
, recommendationoptions_1_estimatedmonthlysavings_currency currency
, 0E0 monthlyprice
, 0E0 hourlyprice
, 0E0 estimatedmonthlysavings_value
, 0E0 estimatedmonthly_ondemand_cost_change
, COALESCE(TRY_CAST(recommendationoptions_1_estimatedmonthlysavings_value AS double), 0E0) as max_estimatedmonthlysavings_value_very_low
, COALESCE(TRY_CAST(recommendationoptions_1_estimatedmonthlysavings_value AS double), 0E0) as max_estimatedmonthlysavings_value_low
, COALESCE(TRY_CAST(recommendationoptions_1_estimatedmonthlysavings_value AS double), 0E0) as max_estimatedmonthlysavings_value_medium
, CONCAT(
CONCAT(TRY("split_part"(servicearn, '/', 2)), ';'),
CONCAT(COALESCE(recommendations_count, ''), ';'),
'', ';',
'', ';',
'', ';',
CONCAT(COALESCE(TRY("split_part"(currentserviceconfiguration_taskdefinitionarn, '/', 2)), ''), ';'),
CONCAT(COALESCE(currentserviceconfiguration_taskdefinitionarn, ''), ';'),
CONCAT(COALESCE(currentperformancerisk, ''), ';'),
CONCAT(COALESCE(currentserviceconfiguration_autoscalingconfiguration, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_1_containername, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_2_containername, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_3_containername, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_4_containername, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_5_containername, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_6_containername, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_7_containername, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_8_containername, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_9_containername, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_10_containername, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_1_memory, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_1_memoryreservation, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_2_memory, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_2_memoryreservation, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_3_memory, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_3_memoryreservation, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_4_memory, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_4_memoryreservation, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_5_memory, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_5_memoryreservation, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_6_memory, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_6_memoryreservation, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_7_memory, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_7_memoryreservation, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_8_memory, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_8_memoryreservation, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_9_memory, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_9_memoryreservation, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_10_memory, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_10_memoryreservation, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_1_cpu, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_2_cpu, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_3_cpu, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_4_cpu, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_5_cpu, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_6_cpu, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_7_cpu, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_8_cpu, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_9_cpu, ''), ';'),
CONCAT(COALESCE(currentservicecontainerconfiguration_10_cpu, ''), ';'),
CONCAT('', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';')
) option_details
, tags tags
FROM
compute_optimizer_ecs_service_lines
WHERE (servicearn LIKE '%arn:%')
UNION SELECT
TRY("date_parse"(lastrefreshtimestamp_utc, '%Y-%m-%dT%H:%i:%s.%fZ')) lastrefreshtimestamp_utc
, accountid accountid
, servicearn arn
, TRY("split_part"(servicearn, ':', 4)) region
, TRY("split_part"(servicearn, ':', 3)) service
, TRY("split_part"(servicearn, '/', 3)) name
, 'ecs_service' module
, effectiverecommendationpreferencessavingsestimationmodesource recommendationsourcetype
, finding finding
, '' reason
, lookbackperiodindays lookbackperiodindays
, '' currentperformancerisk
, errorcode errorcode
, errormessage errormessage
, 'na' ressouce_details
, CONCAT(
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
recommendationoptions_1_projectedutilizationmetrics_cpu_maximum_upperboundvalue, ';',
recommendationoptions_1_projectedutilizationmetrics_cpu_maximum_lowerboundvalue, ';',
recommendationoptions_1_projectedutilizationmetrics_memory_maximum_upperboundvalue, ';',
recommendationoptions_1_projectedutilizationmetrics_memory_maximum_lowerboundvalue
) utilizationmetrics
, 'Recommendation' option_name
, '' option_from
, CONCAT(
recommendationoptions_1_cpu, ';',
recommendationoptions_1_memory
) option_to
, recommendationoptions_1_estimatedmonthlysavings_currency currency
, 0E0 monthlyprice
, 0E0 hourlyprice
, COALESCE(TRY_CAST(recommendationoptions_1_estimatedmonthlysavings_value AS double), 0E0) as estimatedmonthlysavings_value
, COALESCE(TRY_CAST(recommendationoptions_1_estimatedmonthlysavings_value AS double), 0E0) as estimatedmonthly_ondemand_cost_change
, COALESCE(TRY_CAST(recommendationoptions_1_estimatedmonthlysavings_value AS double), 0E0) as max_estimatedmonthlysavings_value_very_low
, COALESCE(TRY_CAST(recommendationoptions_1_estimatedmonthlysavings_value AS double), 0E0) as max_estimatedmonthlysavings_value_low
, COALESCE(TRY_CAST(recommendationoptions_1_estimatedmonthlysavings_value AS double), 0E0) as max_estimatedmonthlysavings_value_medium
, CONCAT(
CONCAT(TRY("split_part"(servicearn, '/', 2)), ';'),
CONCAT(COALESCE(recommendations_count, ''), ';'),
CONCAT(
COALESCE(recommendationoptions_1_savingsopportunitypercentage, ''), ';',
COALESCE(recommendationoptions_1_estimatedmonthlysavingsafterdiscounts_value, ''), ';',
COALESCE(recommendationoptions_1_savingsopportunitypercentageafterdiscounts, ''), ';',
COALESCE(TRY("split_part"(currentserviceconfiguration_taskdefinitionarn, '/', 2)), ''), ';',
COALESCE(currentserviceconfiguration_taskdefinitionarn, ''), ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';',
'', ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemory_1, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemoryreservation_1, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemory_2, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemoryreservation_2, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemory_3, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemoryreservation_3, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemory_4, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemoryreservation_4, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemory_5, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemoryreservation_5, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemory_6, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemoryreservation_6, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemory_7, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemoryreservation_7, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemory_8, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemoryreservation_8, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemory_9, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemoryreservation_9, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemory_10, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containermemoryreservation_10, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containercpu_1, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containercpu_2, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containercpu_3, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containercpu_4, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containercpu_5, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containercpu_6, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containercpu_7, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containercpu_8, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containercpu_9, ''), ';'),
CONCAT(COALESCE(recommendationoptions_1_containercpu_10, ''), ';')
) option_details
, tags tags
FROM
compute_optimizer_ecs_service_lines
WHERE (servicearn LIKE '%arn:%')
)
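The view above derives `region`, `service`, and `name` by splitting the ECS service ARN with Athena's `split_part`, which is 1-indexed. A minimal Python sketch of the same parsing (0-indexed) shows what each expression extracts from an ARN of the form `arn:aws:ecs:region:account:service/cluster/name`:

```python
# Sketch of the ARN parsing done by split_part() in the view above.
# Athena's split_part() is 1-indexed; Python list indexing is 0-indexed.
def parse_ecs_service_arn(arn):
    parts = arn.split(':')          # split_part(servicearn, ':', n)
    slash_parts = arn.split('/')    # split_part(servicearn, '/', n)
    return {
        'service': parts[2],        # split_part(..., ':', 3) -> 'ecs'
        'region': parts[3],         # split_part(..., ':', 4)
        'cluster': slash_parts[1],  # split_part(..., '/', 2)
        'name': slash_parts[2],     # split_part(..., '/', 3)
    }

arn = "arn:aws:ecs:eu-west-1:111122223333:service/my-cluster/my-service"
print(parse_ecs_service_arn(arn))
```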
18 changes: 16 additions & 2 deletions cid/builtin/core/data/resources.yaml
@@ -772,7 +772,15 @@ views:
parameters:
s3FolderPath:
default: 's3://cid-data-{account_id}/compute_optimizer/compute_optimizer_rds_database'
description: Compute Optimiser RDS report S3 path
description: Compute Optimizer RDS report S3 path

compute_optimizer_ecs_service_lines:
type: Glue_Table
File: co/ecs_service.json
parameters:
s3FolderPath:
default: 's3://cid-data-{account_id}/compute_optimizer/compute_optimizer_ecs_service'
description: Compute Optimizer ECS report S3 path

compute_optimizer_auto_scale_lines:
type: Glue_Table
@@ -834,6 +842,12 @@ views:
views:
- compute_optimizer_rds_instance_lines

compute_optimizer_ecs_service_options:
File: co/ecs_service_options.sql
dependsOn:
views:
- compute_optimizer_ecs_service_lines

compute_optimizer_all_options:
File: co/all_options.sql
dependsOn:
@@ -844,7 +858,7 @@ views:
- compute_optimizer_lambda_options
- compute_optimizer_rds_instance_options
- compute_optimizer_rds_storage_options

- compute_optimizer_ecs_service_options

# Shared views
account_map:
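The new `compute_optimizer_ecs_service_lines` Glue table declares an `s3FolderPath` default containing an `{account_id}` placeholder. How cid renders this template internally is not shown here, but a plain `str.format` substitution illustrates the intent:

```python
# Illustration only: rendering an s3FolderPath default that contains an
# {account_id} placeholder. The actual templating inside cid may differ.
default = 's3://cid-data-{account_id}/compute_optimizer/compute_optimizer_ecs_service'
path = default.format(account_id='111122223333')
print(path)
```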
72 changes: 30 additions & 42 deletions cid/common.py
@@ -3,14 +3,14 @@
import urllib
import logging
import functools
import webbrowser
from string import Template
from typing import Dict
from pkg_resources import resource_string
from importlib.metadata import entry_points
from functools import cached_property

import yaml
import click
import requests
from botocore.exceptions import ClientError, NoCredentialsError, CredentialRetrievalError

@@ -362,12 +362,11 @@ def get_template_parameters(self, parameters: dict, param_prefix: str='', others
prefix = '' if value.get('global') else param_prefix
if isinstance(value, str):
params[key] = value
elif isinstance(value, dict) and str(value.get('type')).endswith('tag_and_cost_category_fields'):
cur_version = '2' if str(value.get('type')).startswith('cur2.') else '1'
elif isinstance(value, dict) and value.get('type') == 'cur.tag_and_cost_category_fields':
params[key] = get_parameter(
param_name=prefix + key,
message=f"Required parameter: {key} ({value.get('description')})",
choices=self.get_cur(cur_version).tag_and_cost_category_fields + ["'none'"],
choices=self.cur.tag_and_cost_category_fields + ["'none'"],
)
elif isinstance(value, dict) and value.get('type') == 'athena':
if get_parameters().get(prefix + key): # priority to user input
@@ -434,9 +433,7 @@ def _deploy(self, dashboard_id: str=None, recursive=True, update=False, **kwargs

self.ensure_subscription()

# In case if we cannot discover datasets, we need to discover dashboards
# TODO: check if datasets returns explicit permission denied and only then discover dashboards as a workaround
self.qs.dashboards
self.qs.pre_discover()

dashboard_id = dashboard_id or get_parameters().get('dashboard-id')
category_filter = [cat for cat in get_parameters().get('category', '').upper().split(',') if cat]
@@ -479,7 +476,7 @@ def _deploy(self, dashboard_id: str=None, recursive=True, update=False, **kwargs
dashboard_definition = self.get_definition("dashboard", id=dashboard_id)
dashboard = None
try:
dashboard = self.qs.discover_dashboard(dashboardId=dashboard_id)
dashboard = self.qs.discover_dashboard(dashboard_id)
except CidCritical:
pass

@@ -528,11 +525,10 @@ def _deploy(self, dashboard_id: str=None, recursive=True, update=False, **kwargs

compatible = self.check_dashboard_version_compatibility(dashboard_id)
if not recursive and compatible == False:
if get_parameter(
if get_yesno_parameter(
param_name=f'confirm-recursive',
message=f'This is a major update and require recursive action. This could lead to the loss of dataset customization. Continue anyway?',
choices=['yes', 'no'],
default='yes') != 'yes':
default='yes'):
return
logger.info("Switch to recursive mode")
recursive = True
@@ -654,19 +650,17 @@ def open(self, dashboard_id, **kwargs):
if not dashboard_id:
dashboard_id = self.qs.select_dashboard(force=True)

dashboard = self.qs.discover_dashboard(dashboardId=dashboard_id)

click.echo('Getting dashboard status...', nl=False)
if dashboard is not None:
if dashboard.version.get('Status') not in ['CREATION_SUCCESSFUL']:
print(json.dumps(dashboard.version.get('Errors'),
indent=4, sort_keys=True, default=str))
click.echo(
f'\nDashboard is unhealthy, please check errors above.')
click.echo('healthy, opening...')
click.launch(self.qs_url.format(dashboard_id=dashboard_id, **self.qs_url_params))
else:
click.echo('not deployed.')
dashboard = self.qs.discover_dashboard(dashboard_id)

logger.info('Getting dashboard status...')
if not dashboard:
logger.error(f'{dashboard_id} is not deployed.')
return None
if dashboard.version.get('Status') not in ['CREATION_SUCCESSFUL', 'UPDATE_IN_PROGRESS', 'UPDATE_SUCCESSFUL']:
cid_print(json.dumps(dashboard.version.get('Errors'), indent=4, sort_keys=True, default=str))
cid_print(f'Dashboard {dashboard_id} is unhealthy, please check errors above.')
logger.info('healthy, opening...')
webbrowser.open(self.qs_url.format(dashboard_id=dashboard_id, **self.qs_url_params))

return dashboard_id

@@ -683,7 +677,7 @@ def status(self, dashboard_id, **kwargs):
if not dashboard_id:
print('No dashboard selected')
return
dashboard = self.qs.discover_dashboard(dashboardId=dashboard_id)
dashboard = self.qs.discover_dashboard(dashboard_id)

if dashboard is not None:
dashboard.display_status()
@@ -725,11 +719,7 @@ def status(self, dashboard_id, **kwargs):
logger.info(f'Updating dashboard: {dashboard.id} with Recursive = {recursive}')
self._deploy(dashboard_id, recursive=recursive, update=True)
logger.info('Rediscover dashboards after update')

refresh_overrides = [
dashboard.id
]
self.qs.discover_dashboards(refresh_overrides = refresh_overrides)
self.qs.discover_dashboards(refresh_overrides=[dashboard.id])
self.qs.clear_dashboard_selection()
dashboard_id = None
else:
@@ -748,7 +738,7 @@ def delete(self, dashboard_id, **kwargs):
return

if self.qs.dashboards and dashboard_id in self.qs.dashboards:
datasets = self.qs.discover_dashboard(dashboardId=dashboard_id).datasets # save for later
datasets = self.qs.discover_dashboard(dashboard_id).datasets # save for later
else:
dashboard_definition = self.get_definition("dashboard", id=dashboard_id)
datasets = {d: None for d in (dashboard_definition or {}).get('dependsOn', {}).get('datasets', [])}
@@ -793,16 +783,14 @@ def delete_dataset(self, name: str, id: str=None):
logger.debug(f'Picking the first of dataset databases: {dataset.schemas}')
self.athena.DatabaseName = schema

if get_parameter(
if get_yesno_parameter(
param_name=f'confirm-{dataset.name}',
message=f'Delete QuickSight Dataset {dataset.name}?',
choices=['yes', 'no'],
default='no') == 'yes':
default='no'):
print(f'Deleting dataset {dataset.name} ({dataset.id})')
self.qs.delete_dataset(dataset.id)
else:
logger.info(f'Skipping dataset {dataset.name}')
print (f'Skipping dataset {dataset.name}')
cid_print(f'Skipping dataset {dataset.name}')
return False
if not dataset.datasources:
continue
@@ -855,7 +843,7 @@ def delete_view(self, view_name):
def cleanup(self, **kwargs):
"""Delete unused resources (QuickSight datasets not used in Dashboards)"""

self.qs.discover_dashboards()
self.qs.pre_discover()
self.qs.discover_datasets()
references = {}
for dashboard in self.qs.dashboards.values():
@@ -893,9 +881,9 @@ def _share(self, dashboard_id, **kwargs):
return
else:
# Describe dashboard by the ID given, no discovery
self.qs.discover_dashboard(dashboardId=dashboard_id)
self.qs.discover_dashboard(dashboard_id)

dashboard = self.qs.discover_dashboard(dashboardId=dashboard_id)
dashboard = self.qs.discover_dashboard(dashboard_id)

if dashboard is None:
print('not deployed.')
@@ -1066,7 +1054,7 @@ def update(self, dashboard_id, recursive=False, force=False, **kwargs):
def check_dashboard_version_compatibility(self, dashboard_id):
""" Returns True | False | None if could not check """
try:
dashboard = self.qs.discover_dashboard(dashboardId=dashboard_id)
dashboard = self.qs.discover_dashboard(dashboard_id)
except CidCritical:
print(f'Dashboard "{dashboard_id}" is not deployed')
return None
@@ -1086,7 +1074,7 @@ def check_dashboard_version_compatibility(self, dashboard_id):

def update_dashboard(self, dashboard_id, dashboard_definition):

dashboard = self.qs.discover_dashboard(dashboardId=dashboard_id)
dashboard = self.qs.discover_dashboard(dashboard_id)
if not dashboard:
print(f'Dashboard "{dashboard_id}" is not deployed')
return
@@ -1363,7 +1351,7 @@ def create_or_update_dataset(self, dataset_definition: dict, dataset_id: str=Non
# Read dataset definition from template
data = self.get_data_from_definition('dataset', dataset_definition)
template = Template(json.dumps(data))
cur1_required = dataset_definition.get('dependsOn', dict()).get('cur')
cur1_required = dataset_definition.get('dependsOn', dict()).get('cur') or dataset_definition.get('dependsOn', dict()).get('cur1')
cur2_required = dataset_definition.get('dependsOn', dict()).get('cur2')
athena_datasource = None

57 changes: 31 additions & 26 deletions cid/export.py
@@ -43,17 +43,7 @@ def escape_id(id_):
def choose_analysis(qs):
""" Choose analysis """
try:
analyzes = []
logger.info("Discovering analyses")
paginator = qs.client.get_paginator('list_analyses')
response_iterator = paginator.paginate(
AwsAccountId=qs.account_id,
PaginationConfig={'MaxItems': 100}
)
for page in response_iterator:
analyzes.extend(page.get('AnalysisSummaryList'))
if len(analyzes) == 100:
logger.info('Too many analyses. Will show first 100')
analyzes = qs.client.get_paginator('list_analyses').paginate(AwsAccountId=qs.account_id).search('AnalysisSummaryList')
except qs.client.exceptions.AccessDeniedException:
logger.info("AccessDeniedException while discovering analyses")
return None
@@ -147,7 +137,7 @@ def export_analysis(qs, athena, glue):
"ImportMode": dataset.raw['ImportMode'],
}

for key, value in dataset_data['PhysicalTableMap'].items():
for key, value in dataset_data['PhysicalTableMap'].items(): # iterate all sub tables
if 'RelationalTable' in value \
and 'DataSourceArn' in value['RelationalTable'] \
and 'Schema' in value['RelationalTable']:
@@ -183,22 +173,26 @@ def export_analysis(qs, athena, glue):
#FIXME add value['Source']['DataSetArn'] to the list of dataset_arn
raise CidCritical(f"DataSet {dataset.raw['Name']} contains unsupported join. Please replace join of {value.get('Alias')} from DataSet to DataSource")

# Checking if datasets is based on CUR. It is rather are rare case as typically views depend on CUR.
cur_version = False
cur_fields = None
for dep_view in dependency_views[:]:
version = cur_helper.table_is_cur(name=dep_view)
if version:
dependency_views.remove(dep_view)
cur_version = True
cur_version = version
cur_fields = True # FIXME: check the list of fields in datasets and fields in cur. Put a list instead of 'cur2: True'

datasets[dataset_name] = {
'data': dataset_data,
'dependsOn': {'views': dependency_views},
'schedules': ['default'], #FIXME: need to read a real schedule
}
# FIXME: add a list of all columns used in the view

if cur_version == '1':
datasets[dataset_name]['dependsOn']['cur'] = True
datasets[dataset_name]['dependsOn']['cur'] = cur_fields
elif cur_version == '2':
datasets[dataset_name]['dependsOn']['cur2'] = True
datasets[dataset_name]['dependsOn']['cur2'] = cur_fields

all_views = [view_and_database[0] for view_and_database in all_views_and_databases]
all_databases = [view_and_database[1] for view_and_database in all_views_and_databases]
@@ -227,19 +221,30 @@ def export_analysis(qs, athena, glue):
deps = view_data.get('dependsOn', {})
non_cur_dep_views = []
for dep_view in deps.get('views', []):
dep_view_name = dep_view.split('.')[-1]
if dep_view_name in cur_tables or cur_helper.table_is_cur(name=dep_view_name):
cur_version = cur_helper.table_is_cur(name=dep_view_name)
logger.debug(f'{dep_view_name} is cur')
view_data['dependsOn']['cur'] = True
if '.' in dep_view:
dep_view_name = dep_view.split('.')[1].replace('"','')
dep_view_database = dep_view.split('.')[0].replace('"','')
else:
dep_view_name = dep_view
dep_view_database = athena.DatabaseName

if dep_view_name in cur_tables or cur_helper.table_is_cur(name=dep_view_name, database=dep_view_database):
cur_helper.set_cur(table=dep_view_name, database=dep_view_database)
cid_print(f' {dep_view_name} is CUR {cur_helper.version}')
# replace cur table name with a variable
if isinstance(view_data.get('data'), str):
# cur tables treated separately as we don't manage CUR table here
if dep_view_name != 'cur':
backslash = "\\" # workaround f-string limitation
view_data['data'] = view_data['data'].replace(f'{dep_view_name}', f'"${backslash}cur{cur_version}_database{backslash}"."${backslash}cur{cur_version}_table_name{backslash}"')
else:
pass # FIXME: this replace is too dangerous as cur can be a part of other words. Need to find some other way
cur_replacement = {
'2': ["${cur2_database}","${cur2_table_name}"],
'1': ["${cur_database}","${cur_table_name}"],
}[cur_helper.version]
view_data['data'] = re.sub(r'\b' + re.escape(dep_view) + r'\b', cur_replacement[1], view_data['data'])
view_data['data'] = re.sub(r'\b' + re.escape(dep_view_database) + r'\b', cur_replacement[0], view_data['data'])
fields = []
for field in cur_helper.fields:
if field in view_data['data']:
fields.append(field)
view_data['dependsOn'][f'cur{cur_helper.version}'] = fields or True
cur_tables.append(dep_view_name)
else:
logger.debug(f'{dep_view_name} is not cur')
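The export change above swaps a plain string replace for `re.sub` with `\b` word boundaries when substituting the CUR table reference with a template variable, precisely because a short name like `cur` can appear inside longer identifiers. A small demonstration of the difference (the sample SQL and identifiers are invented for illustration):

```python
import re

# Why word boundaries matter when swapping a CUR table reference for a
# template variable: a plain str.replace('cur', ...) would also hit
# substrings inside identifiers like cid_cur or curated_data.
sql = 'SELECT * FROM "cid_cur"."cur" JOIN curated_data ON ...'
out = re.sub(r'\b' + re.escape('cur') + r'\b', '${cur2_table_name}', sql)
print(out)  # only the standalone "cur" reference is replaced
```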
19 changes: 10 additions & 9 deletions cid/helpers/athena.py
@@ -81,16 +81,18 @@ def DatabaseName(self) -> str:
athena_databases = self.list_databases()

# check if we have a default database
print(athena_databases)
logger.info(f'athena_databases = {athena_databases}')
default_databases = [database for database in athena_databases if database == self.defaults.get('DatabaseName')]
if 'cid_cur' in athena_databases:
default_databases = ['cid_cur']

# Ask user
choices = list(athena_databases)
if self.defaults.get('DatabaseName') not in choices:
choices.append(self.defaults.get('DatabaseName') + ' (CREATE NEW)')
self._DatabaseName = get_parameter(
param_name='athena-database',
message="Select AWS Athena database to use",
message="Select AWS Athena database to use as default",
choices=choices,
default=default_databases[0] if default_databases else None,
)
@@ -391,11 +393,10 @@ def wait_for_view(self, view_name: str, poll_interval=1, timeout=60) -> None:


def delete_table(self, name: str, catalog: str=None, database: str=None):
if get_parameter(
if not get_yesno_parameter(
param_name=f'confirm-{name}',
message=f'Delete Athena table {name}?',
choices=['yes', 'no'],
default='no') != 'yes':
default='no'):
return False

try:
@@ -415,11 +416,10 @@ def delete_table(self, name: str, catalog: str=None, database: str=None):
return True

def delete_view(self, name: str, catalog: str=None, database: str=None):
if get_parameter(
if not get_yesno_parameter(
param_name=f'confirm-{name}',
message=f'Delete Athena view {name}?',
choices=['yes', 'no'],
default='no') != 'yes':
default='no'):
return False

try:
@@ -545,7 +545,8 @@ def create_or_update_view(self, view_name, view_query):
param_name='view-' + view_name + '-override',
message=f'The existing view is different. Override?',
choices=['retry diff', 'proceed and override', 'keep existing', 'exit'],
default='retry diff'
default='retry diff',
fuzzy=False,
)
if choice == 'retry diff':
unset_parameter('view-' + view_name + '-override')
22 changes: 14 additions & 8 deletions cid/helpers/cur.py
@@ -148,10 +148,10 @@ def ensure_column(self, column: str, column_type: str=None):
""" Ensure column is in the cur. If it is not there - add column """
pass

def table_is_cur(self, table: dict=None, name: str=None, return_reason: bool=False) -> bool:
def table_is_cur(self, table: dict=None, name: str=None, return_reason: bool=False, database: str=None) -> bool:
""" return cur version if table metadata fits CUR definition. """
try:
table = table or self.athena.get_table_metadata(name)
table = table or self.athena.get_table_metadata(name, database)
except Exception as exc: #pylint: disable=broad-exception-caught
logger.warning(exc)
return False if not return_reason else (False, f'cannot get table {name}. {exc}.')
@@ -201,8 +201,10 @@ def tag_and_cost_category_fields(self) -> list:
class CUR(AbstractCUR):
"""This Class represents CUR table (1 or 2 versions)"""

def __init__(self, athena, glue):
def __init__(self, athena, glue, database: str=None, table: str=None):
super().__init__(athena, glue)
if database and table:
self.set_cur(database, table)

@property
def metadata(self) -> dict:
@@ -213,16 +215,20 @@ def metadata(self) -> dict:
# good place to set a database for athena
return self._metadata

def find_cur(self):

def set_cur(self, database: str=None, table: str=None):
self._database, self._metadata = self.find_cur(database, table)

def find_cur(self, database: str=None, table: str=None):
"""Choose CUR"""
metadata = None
cur_database = get_parameters().get('cur-database')
if get_parameters().get('cur-table-name'):
table_name = get_parameters().get('cur-table-name')
cur_database = database or get_parameters().get('cur-database')
if table or get_parameters().get('cur-table-name'):
table_name = table or get_parameters().get('cur-table-name')
try:
metadata = self.athena.get_table_metadata(table_name, cur_database)
except self.athena.client.exceptions.MetadataException as exc:
raise CidCritical(f'Provided cur-table-name "{table_name}" in database "{cur_database or self.athena.DatabaseName}" is not found. Please make sure the table exists.') from exc
raise CidCritical(f'Provided cur-table-name "{table_name}" in database "{cur_database or self.athena.DatabaseName}" is not found. Please make sure the table exists. This could also indicate a LakeFormation permission issue, see our FAQ for help.') from exc
res, message = self.table_is_cur(table=metadata, return_reason=True)
if not res:
raise CidCritical(f'Table {table_name} does not look like CUR. {message}')
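The new `CUR` constructor and `find_cur` signatures let callers pass an explicit `database`/`table` that takes precedence over CLI parameters. A minimal sketch of that precedence pattern (function and parameter names here are illustrative, not the actual cid helpers):

```python
def resolve_cur_table(table=None, database=None, cli_params=None):
    """Prefer explicit arguments, then fall back to CLI parameters.

    Mirrors the `table or get_parameters().get('cur-table-name')` precedence
    used by find_cur above; `cli_params` stands in for cid's get_parameters().
    """
    cli_params = cli_params or {}
    resolved_table = table or cli_params.get('cur-table-name')
    resolved_database = database or cli_params.get('cur-database')
    return resolved_database, resolved_table

# Explicit arguments win over CLI parameters:
print(resolve_cur_table(table='cur2',
                        cli_params={'cur-table-name': 'cur1', 'cur-database': 'cid'}))
# → ('cid', 'cur2')
```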
2 changes: 1 addition & 1 deletion cid/helpers/glue.py
@@ -73,7 +73,7 @@ def create_or_update_crawler(self, crawler_definition) -> None:
self.client.update_crawler(**crawler_definition)
except self.client.exceptions.ClientError as exc:
if 'Service is unable to assume provided role' in str(exc):
logger.info('attempt{attempt}: Retrying ') # sometimes newly created roles cannot be assumed right away
logger.info(f'attempt{attempt}: Retrying ') # sometimes newly created roles cannot be assumed right away
time.sleep(3)
continue
logger.error(crawler_definition)
2 changes: 1 addition & 1 deletion cid/helpers/iam.py
@@ -71,7 +71,7 @@ def ensure_managed_policies_attached(self, role_name, policies_arns='') -> None:
RoleName=role_name,
PolicyArn=policy_arn,
)
logger.info('Attached {policy_arn} to the role {role_name}')
logger.info(f'Attached {policy_arn} to the role {role_name}')
except self.client.exceptions.ClientError as exc:
logger.warning(f'Unable to attach policy {policy_arn} to {role_name}: {exc}')

320 changes: 103 additions & 217 deletions cid/helpers/quicksight/__init__.py

Large diffs are not rendered by default.

240 changes: 212 additions & 28 deletions cid/helpers/quicksight/dashboard.py
@@ -1,13 +1,18 @@
import click
import io
import json
import logging
from typing import Dict

import yaml

from cid.helpers.quicksight.resource import CidQsResource
from cid.helpers.quicksight.definition import Definition as CidQsDefinition
from cid.helpers.quicksight.template import Template as CidQsTemplate
from cid.utils import cid_print, get_yesno_parameter
from cid.helpers.quicksight.resource import CidQsResource
from cid.helpers.quicksight.dataset import Dataset
from cid.helpers.quicksight.version import CidVersion


logger = logging.getLogger(__name__)

@@ -16,35 +21,162 @@ class Dashboard(CidQsResource):
def __init__(self, raw: dict, qs=None) -> None:
super().__init__(raw)
# Initialize properties
self.datasets: Dict[str, str] = {}
# Deployed template
self._datasets: Dict[str, str] = {}
self._definition: Dict = None # CID definition from resource yaml. THIS IS NOT QS DEFINITION!
self._tag_version: str = None
self._deployed_template: CidQsTemplate = None
self._deployed_definition: CidQsDefinition = None
self._status = str()
self.status_detail = str()
# Source template in origin account
self.source_template: CidQsTemplate = None
self.source_definition: CidQsDefinition = None
self._status: str = ''
self.status_detail: str = ''
self._source_template: CidQsTemplate = None
self._source_definition: CidQsDefinition = None
self._cid_version = None
self.qs = qs

@property
def id(self) -> str:
'''DashboardId'''
return self.get_property('DashboardId')


@property
def arn(self) -> str:
'''Arn'''
return self.get_property('Arn').split('/version/')[0]


@property
def version(self) -> dict:
'''Dashboard data for the current QuickSight version (note: this is not the CID version)'''
return self.get_property('Version')


@property
def definition(self):
''' CID definition from resource yaml. THIS IS NOT QS DEFINITION!
'''
if self._definition is not None:
return self._definition
# Look for dashboard definition by DashboardId in the catalog of supported dashboards (the currently available definitions in their latest public version)
# This definition can be used to determine the gap between the latest public version and the currently deployed version
self._definition = next((v for v in self.qs.supported_dashboards.values() if v['dashboardId'] == self.id), None)

if not self._definition:
# Look for dashboard definition by templateId.
# This is for a specific use-case when a dashboard with another id points to managed template
logger.debug(f'dashboard "{self.id}" is not found in supported dashboards by id, will try to match by template.')
source_arn = self.raw.get('Version', {}).get('SourceEntityArn', '')
if source_arn:
template_id = source_arn.split('/version/')[0].split('/')[-1]
template_account = source_arn.split(':')[4]
self._definition = next((v for v in self.qs.supported_dashboards.values() if 'templateId' in v and v['templateId'] == template_id), None)
if not self._definition:
self._definition = {}
logger.info(f'Unsupported dashboard "{self.name}"')
return self._definition
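The fallback above matches a deployed dashboard to a supported definition via the template referenced in `SourceEntityArn`. A small self-contained sketch of the same ARN slicing, with a sample ARN chosen for illustration:

```python
def parse_template_arn(source_arn: str):
    """Extract the template id and owning account id from a QuickSight template ARN.

    Uses the same string slicing as the definition fallback above.
    Example ARN (illustrative):
    arn:aws:quicksight:us-east-1:123456789012:template/cudos-v5/version/3
    """
    template_id = source_arn.split('/version/')[0].split('/')[-1]
    template_account = source_arn.split(':')[4]
    return template_id, template_account

arn = 'arn:aws:quicksight:us-east-1:123456789012:template/cudos-v5/version/3'
print(parse_template_arn(arn))  # → ('cudos-v5', '123456789012')
```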


@property
def source_template(self) -> CidQsTemplate:
# Fetch the latest version of source_template referenced in definition
if self._source_template:
return self._source_template
template_id = self.definition.get('templateId')
if template_id:
source_template_account_id = self.definition.get('sourceAccountId')
region = self.definition.get('region', 'us-east-1')
try:
logger.debug(f'Loading latest source template {template_id} from source account {source_template_account_id} in {region}')
self._source_template = self.qs.describe_template(
template_id,
account_id=source_template_account_id,
region=region
)
except Exception as exc:
logger.debug(exc, exc_info=True)
logger.info(f'Unable to describe template {template_id} in {source_template_account_id} ({region})')
return self._source_template

def _patch_template_version(self, template):
# Checking for version override in template definition
# Check for extra information from resource definition
version_obj = self.definition.get('versions', dict())
min_template_version = _safe_int(version_obj.get('minTemplateVersion'))
default_description_version = version_obj.get('minTemplateDescription')

if not isinstance(template, CidQsTemplate)\
or int(template.version) <= 0 \
or not version_obj:
return

logger.debug("versions object found in template")
version_map = version_obj.get('versionMap', dict())
description_override = version_map.get(int(template.version))

try:
if description_override:
logger.info(f"Template description is overridden with: {description_override}")
description_override = str(description_override)
template.raw['Version']['Description'] = description_override
else:
if min_template_version and default_description_version:
if int(template.version) <= min_template_version:
logger.info(f"The template version does not provide cid_version in description, using the default template description: {default_description_version}")
template.raw['Version']['Description'] = default_description_version
except ValueError as val_error:
logger.debug(val_error, exc_info=True)
logger.info("The provided values of the versions object are not well formed, please use int for template version and str for template description")
except Exception as exc:
logger.debug(exc, exc_info=True)
logger.info("Unable to override template description")

@property
def deployed_template(self) -> CidQsTemplate:
''' Fetch template referenced as current dashboard source (if any)
'''
if self._deployed_template:
return self._deployed_template
_template_arn = self.version.get('SourceEntityArn')

if _template_arn and isinstance(_template_arn, str) \
and len(_template_arn.split(':')) > 5 \
and _template_arn.split(':')[5].startswith('template/'):
params = {
"region": _template_arn.split(':')[3],
"account_id": _template_arn.split(':')[4],
"template_id": _template_arn.split('/')[1],
}
if '/version/' in _template_arn:
params['version_number'] = int(_template_arn.split('/version/')[-1] or 0)
else:
# in some older deployments versions was not referenced so we try to get it from resources yaml
version_obj = self.definition.get('versions', {}) if self.definition else {}
min_template_version = int(version_obj.get('minTemplateVersion', 0))  # 0 is not a valid template version; versions start at 1
if min_template_version:
logger.debug(f"Using default version number {min_template_version} as a fallback")
params['version_number'] = min_template_version
else:
logger.debug(f"Minimum template version could not be found for Dashboard {self.id}: {_template_arn}. Cannot describe the deployed template to get its version.")
return self._deployed_template # None
try:
logger.debug(f'Describing template {_template_arn}')
_template = self.qs.describe_template(**params)
if isinstance(_template, CidQsTemplate):
self._deployed_template = _template
except Exception as exc:
logger.debug(exc, exc_info=True)
logger.debug(f'Unable to describe template for {self.id}, {exc}')
return self._deployed_template

@deployed_template.setter
def deployed_template(self, template: CidQsTemplate) -> None:
self._deployed_template = template

@property
def deployed_definition(self) -> CidQsTemplate:
def deployed_definition(self):
if not self._deployed_definition:
self._deployed_definition = self.qs.describe_dashboard_definition(dashboard_id=self.id, refresh=True)
return self._deployed_definition

@deployed_definition.setter
@@ -63,14 +195,42 @@ def template_arn(self) -> str:
return self.deployed_template.arn
return None


def get_dataset_ids(self):
return [dataset.split('/')[-1] for dataset in self.version.get('DataSetArns', [])]

@property
def deployed_cid_version(self) -> int:
if isinstance(self.deployed_template, CidQsTemplate):
return self.deployed_template.cid_version
elif isinstance(self.deployed_definition, CidQsDefinition):
return self.deployed_definition.cid_version
else:
return None
def datasets(self):
if self._datasets:
return self._datasets
for dataset_id in self.get_dataset_ids():
try:
_dataset = self.qs.describe_dataset(id=dataset_id)
if not isinstance(_dataset, Dataset):
logger.debug(f'Dataset "{dataset_id}" is missing')
else:
logger.trace(f"Detected dataset: \"{_dataset.name}\" ({_dataset.id} in {self.id})")
self._datasets[_dataset.name] = _dataset.id
except self.qs.client.exceptions.AccessDeniedException:
logger.debug(f'Access denied describing DataSetId {dataset_id} for Dashboard {self.id}')
except self.qs.client.exceptions.InvalidParameterValueException:
logger.debug(f'Invalid dataset {dataset_id}')
logger.info(f"{self.name} has {len(self._datasets)} datasets")
return self._datasets

@property
def views(self):
# Fetch all views recursively
all_views = []
def _recursive_add_view(view):
all_views.append(view)
for dep_view in (self.qs.supported_views.get(view) or {}).get('dependsOn', {}).get('views', []):
_recursive_add_view(dep_view)
for dataset_name in self.datasets or []:
for view in (self.qs.supported_datasets.get(dataset_name) or {}).get('dependsOn', {}).get('views', []):
_recursive_add_view(view)
return all_views
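The `views` property above walks view dependencies recursively, seeded from each dataset's `dependsOn` block. The same traversal can be sketched with plain dicts standing in for `qs.supported_datasets` / `qs.supported_views` (the sample names below are made up):

```python
def collect_views(datasets, supported_datasets, supported_views):
    """Recursively gather every view the given datasets depend on.

    Same shape as Dashboard.views above; like the original, no cycle
    protection is attempted.
    """
    all_views = []

    def _recursive_add_view(view):
        all_views.append(view)
        for dep in (supported_views.get(view) or {}).get('dependsOn', {}).get('views', []):
            _recursive_add_view(dep)

    for dataset in datasets:
        for view in (supported_datasets.get(dataset) or {}).get('dependsOn', {}).get('views', []):
            _recursive_add_view(view)
    return all_views

supported_views = {
    'summary_view': {'dependsOn': {'views': ['account_map']}},
    'account_map': {},
}
supported_datasets = {'summary_view': {'dependsOn': {'views': ['summary_view']}}}
print(collect_views(['summary_view'], supported_datasets, supported_views))
# → ['summary_view', 'account_map']
```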


@property
def latest(self) -> bool:
@@ -85,13 +245,28 @@ def health(self) -> bool:
return self.status not in ['broken']

@property
def cid_version(self) -> int:
if self.deployed_template:
return self.deployed_template.cid_version
elif self.deployed_definition:
return self.deployed_definition.cid_version
def deployed_cid_version(self):
if self._cid_version:
return self._cid_version
tag_version = (self.qs.get_tags(self.arn) or {}).get('cid_version')
if tag_version:
logger.trace(f'version of {self.arn} from tag = {tag_version}')
self._cid_version = CidVersion(tag_version)
else:
return None
if self.deployed_template:
self._cid_version = self.deployed_template.cid_version
elif self.deployed_definition:
self._cid_version = self.deployed_definition.cid_version
if self._cid_version:
logger.trace(f'setting tag of {self.arn} to cid_version = {self._cid_version}')
self.qs.set_tags(self.arn, cid_version=self._cid_version)
return self._cid_version


@property
def cid_version(self): # for backward compatibility
return self.deployed_cid_version


@property
def latest_available_cid_version(self) -> int:
@@ -102,6 +277,21 @@ def latest_available_cid_version(self) -> int:
else:
return None

@property
def supported(self) -> bool:
return True if self.definition else False


@property
def source_definition(self):
if self._source_definition:
return self._source_definition
if 'data' in self.definition:
# Resolve source definition (the latest definition publicly available)
data_stream = io.StringIO(self.definition["data"])
definition_data = yaml.safe_load(data_stream) # FIXME: there can be template variables.
self._source_definition = CidQsDefinition(definition_data)
return self._source_definition

@property
def status(self) -> str:
@@ -113,12 +303,6 @@ def status(self) -> str:
# Not discovered yet
elif not self.definition:
self._status = 'undiscovered'
# Missing dataset
elif not self.datasets or (len(set(self.datasets)) < len(set(self.definition.get('dependsOn').get('datasets')))):
self.status_detail = 'missing dataset(s)'
self._status = 'broken'
logger.info(f"Found datasets: {self.datasets}")
logger.info(f"Required datasets: {self.definition.get('dependsOn').get('datasets')}")
# Source Template has changed
elif self.deployed_template and self.source_template and self.deployed_template.arn and self.source_template.arn and not self.deployed_template.arn.startswith(self.source_template.arn):
self._status = 'legacy'
2 changes: 0 additions & 2 deletions cid/helpers/quicksight/resource.py
Original file line number Diff line number Diff line change
@@ -1,8 +1,6 @@
class CidQsResource():
def __init__(self, raw: dict) -> None:
self.raw: dict = raw
# Resource definition
self.definition = dict()

@property
def name(self) -> str:
5 changes: 4 additions & 1 deletion cid/test/bats/10-deploy-update-delete/cudos.bats
@@ -18,7 +18,7 @@ cur_table="${cur_table:-cur1}" # If variable not set or null, use default. FIXME
--quicksight-user $quicksight_user \
--share-with-account \
--timezone 'Europe/Paris' \
--quicksight-datasource-id $quicksight_datasource_id
--quicksight-datasource-id $quicksight_datasource_id \


[ "$status" -eq 0 ]
@@ -60,16 +60,19 @@ cur_table="${cur_table:-cur1}" # If variable not set or null, use default. FIXME
run cid-cmd -vv --yes update --force --recursive \
--dashboard-id cudos-v5 \
--cur-table-name $cur_table \
--athena-database $database_name\
--athena-workgroup primary\
--timezone 'Europe/Paris' \
--quicksight-user $quicksight_user \
--quicksight-datasource-id $quicksight_datasource_id \

[ "$status" -eq 0 ]
}


@test "Delete runs" {
run cid-cmd -vv --yes delete \
--athena-database $database_name\
--athena-workgroup primary\
--dashboard-id cudos-v5

56 changes: 36 additions & 20 deletions cid/utils.py
@@ -11,7 +11,8 @@
from collections.abc import Iterable

import requests
import questionary
from InquirerPy import inquirer
from InquirerPy.base.control import Choice
from boto3.session import Session
from botocore.exceptions import NoCredentialsError, CredentialRetrievalError, NoRegionError, ProfileNotFound

@@ -168,6 +169,7 @@ def set_parameters(parameters: dict, all_yes: bool=None) -> None:
if all_yes != None:
global _all_yes
_all_yes = all_yes
logger.debug(f'all_yes={all_yes}')

def get_parameters():
return dict(params)
@@ -185,24 +187,29 @@ def get_yesno_parameter(param_name: str, message: str, default: str=None, break_
unset_parameter(param_name)
if default is not None:
default = default.lower()
default = 'yes' if mapping[default] else 'no'
res = get_parameter(param_name, message=message, choices=['yes', 'no'], default=default, break_on_ctrl_c=break_on_ctrl_c)
params[param_name] = (res == 'yes')
default = 'yes' if mapping.get(default) else 'no'

if _all_yes:
params[param_name] = True
else:
res = get_parameter(param_name, message=message, choices=['yes', 'no'], default=default, break_on_ctrl_c=break_on_ctrl_c, fuzzy=False)
params[param_name] = (res == 'yes')
return params[param_name]
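The hunk above replaces `mapping[default]` with `mapping.get(default)`, so a `None` or unexpected default no longer raises `KeyError` and simply falls back to `'no'`. A minimal illustration of that change (the `mapping` literal here mirrors the yes/no mapping implied by the helper, not its exact definition):

```python
mapping = {'yes': True, 'no': False}

def normalize_default(default):
    """Normalize a user-supplied default to 'yes'/'no' without raising.

    mapping[default] would raise KeyError for None or unexpected strings;
    mapping.get(default) treats them as falsy, i.e. 'no'.
    """
    if default is not None:
        default = default.lower()
    return 'yes' if mapping.get(default) else 'no'

print(normalize_default('Yes'))    # → 'yes'
print(normalize_default(None))     # → 'no'
print(normalize_default('maybe'))  # → 'no'
```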


def get_parameter(param_name, message, choices=None, default=None, none_as_disabled=False, template_variables={}, break_on_ctrl_c=True):
def get_parameter(param_name, message, choices=None, default=None, none_as_disabled=False, template_variables={}, break_on_ctrl_c=True, fuzzy=True):
"""
Check if parameters are provided in the command line and if not, ask user
Check if parameters are provided in the command line and if not, ask user
:param message: text message for user
:param choices: a list or dict for choice. None for text entry. Keys and Values must be strings.
:param default: a default text template
:param none_as_disabled: if True and choices is a dict, all choices with None as a value will be disabled
:param template_variables: a dict with varibles for template
:param template_variables: a dict with variables for template
:param break_on_ctrl_c: if True, exit() if user pressed CTRL+C
:param fuzzy: if True, use a fuzzy (type-ahead) select prompt instead of a plain list
:returns: a value choosed by user or provided in command line
:returns: a value from user or provided in command line
"""
logger.debug(f'getting param {param_name}')
param_name = param_name.replace('_', '-')
@@ -217,39 +224,48 @@ def get_parameter(param_name, message, choices=None, default=None, none_as_disab
return value

if choices is not None:
if 'yes' in choices and _all_yes:
if _all_yes and ('yes' in choices):
return 'yes'
if isinstance(choices, dict):
_choices = []
for key, value in choices.items():
_choices.append(
questionary.Choice(
title=key,
Choice(
name=key,
value=value,
disabled=True if (none_as_disabled and value is None) else False,
enabled=not (none_as_disabled and value is None),
)
)
choices = _choices

print()
if not isatty():
raise Exception(f'Please set parameter {param_name}. Unable to request user in environment={exec_env()}')
result = questionary.select(
message=f'[{param_name}] {message}:',
choices=choices,
default=default,
).ask()
if fuzzy:
result = inquirer.fuzzy(
message=f'[{param_name}] {message}:',
choices=choices,
long_instruction='use arrows or start typing',
match_exact=True,
default=default,
).execute()
else:
result = inquirer.select(
message=f'[{param_name}] {message}:',
choices=choices,
long_instruction='use arrows or start typing',
default=default,
).execute()
else: # it is a text entry
if isinstance(default, str) and template_variables:
print(template_variables)
default=default.format(**template_variables)
print()
if not isatty():
raise Exception(f'Please set parameter {param_name}. Unable to request user in environment={exec_env()}')
result = questionary.text(
result = inquirer.text(
message=f'[{param_name}] {message}:' ,
default=default or '',
).ask()
).execute()
if isinstance(result, str) and template_variables:
result = result.format(**template_variables)
if (break_on_ctrl_c and result is None):
10,383 changes: 7,006 additions & 3,377 deletions dashboards/amazon-connect/amazon-connect.yaml


439 changes: 224 additions & 215 deletions dashboards/cora/cora.yaml


47 changes: 35 additions & 12 deletions dashboards/data-transfer/DataTransfer-Cost-Analysis-Dashboard.yaml
Original file line number Diff line number Diff line change
@@ -6,7 +6,7 @@ dashboards:
- data_transfer_view
name: DataTransfer Cost Analysis Dashboard Enhanced
dashboardId: datatransfer-cost-analysis-dashboard
templateId: data-transfer-aga-est-cost-analysis-template-enhanced-v5
templateId: data-transfer-aga-cost-analysis-template-enhanced-v6
sourceAccountId: '869004330191'
region: us-east-1
datasets:
@@ -69,14 +69,19 @@ datasets:
Type: STRING
- Name: tbs
Type: DECIMAL
SubType: FIXED
- Name: usage_quantity
Type: DECIMAL
SubType: FIXED
- Name: blended_cost
Type: DECIMAL
SubType: FIXED
- Name: unblended_cost
Type: DECIMAL
SubType: FIXED
- Name: public_cost
Type: DECIMAL
SubType: FIXED
- Name: blended_rate
Type: STRING
- Name: unblended_rate
@@ -143,37 +148,55 @@ datasets:
views:
data_transfer_view:
dependsOn:
cur: true
cur2:
- product['servicename']
- product['product_name']
- product['transfer_type']
- product['region']
- product['transfer_type']
- line_item_blended_cost
- line_item_unblended_cost
- pricing_public_on_demand_cost
- line_item_usage_type
- product_from_location_type
- line_item_blended_rate
- line_item_unblended_rate
- pricing_public_on_demand_rate
- product_usagetype
data: |-
CREATE OR REPLACE VIEW "${athena_database_name}".data_transfer_view AS
SELECT
product_product_family product_family
, product_servicecode
, product_servicename
, product['servicename'] product_servicename
, line_item_product_code product_code
, line_item_usage_start_date usage_date
, bill_billing_period_start_date billing_period
, bill_payer_account_id payer_account_id
, line_item_usage_account_id linked_account_id
, product_product_name product_name
, product['product_name'] product_name
, line_item_line_item_type charge_type
, line_item_operation operation
, product_region region
, product['region'] region
, line_item_usage_type usage_type
, product_from_location from_location
, product_to_location to_location
, product_from_location_type from_location_type
, line_item_resource_id resource_id
, line_item_blended_rate blended_rate
, line_item_unblended_rate unblended_rate
, pricing_public_on_demand_rate public_ondemand_rate
, product['transfer_type'] data_transfer_type
, ("sum"((CASE WHEN (line_item_line_item_type = 'Usage') THEN line_item_usage_amount ELSE 0 END)) / 1024) TBs
, "sum"((CASE WHEN (line_item_line_item_type = 'Usage') THEN line_item_usage_amount ELSE 0 END)) usage_quantity
, "sum"(line_item_blended_cost) blended_cost
, "sum"(line_item_unblended_cost) unblended_cost
, "sum"(pricing_public_on_demand_cost) public_cost
, line_item_blended_rate blended_rate
, line_item_unblended_rate unblended_rate
, pricing_public_on_demand_rate public_ondemand_rate
, product_transfer_type data_transfer_type
FROM
"${athena_database_name}"."${cur_table_name}"
WHERE ((((line_item_usage_type LIKE '%Bytes%') AND (((line_item_usage_type LIKE '%In%') OR (line_item_usage_type LIKE '%Out%')) OR (line_item_usage_type LIKE '%Regional%'))) AND (line_item_line_item_type IN ('PrivateRateDiscount', 'Usage', 'EdpDiscount'))) AND ((((year = "format_datetime"(current_timestamp, 'YYYY')) AND (month = "format_datetime"(current_timestamp, 'MM'))) OR ((year = "format_datetime"(("date_trunc"('month', current_timestamp) - INTERVAL '2' MONTH), 'YYYY')) AND ((month = "format_datetime"(("date_trunc"('month', current_timestamp) - INTERVAL '2' MONTH), 'MM')) OR (month = "format_datetime"(("date_trunc"('month', current_timestamp) - INTERVAL '2' MONTH), 'M'))))) OR ((year = "format_datetime"(("date_trunc"('month', current_timestamp) - INTERVAL '1' MONTH), 'YYYY')) AND ((month = "format_datetime"(("date_trunc"('month', current_timestamp) - INTERVAL '1' MONTH), 'MM')) OR (month = "format_datetime"(("date_trunc"('month', current_timestamp) - INTERVAL '1' MONTH), 'M'))))))
GROUP BY line_item_product_code, line_item_usage_start_date, bill_billing_period_start_date, line_item_usage_account_id, bill_payer_account_id, product_product_name, line_item_line_item_type, line_item_operation, product_region, product_product_family, product_servicecode, product_servicename, line_item_usage_type, product_from_location, product_to_location, product_from_location_type, line_item_resource_id, line_item_blended_rate, product_transfer_type, product_usagetype, pricing_public_on_demand_cost, pricing_public_on_demand_rate, line_item_unblended_rate, line_item_unblended_cost, line_item_blended_cost
"${cur2_database}"."${cur2_table_name}"
WHERE
line_item_usage_type LIKE '%Bytes%'
AND (line_item_usage_type LIKE '%In%' OR line_item_usage_type LIKE '%Out%' OR line_item_usage_type LIKE '%Regional%')
AND (line_item_line_item_type = 'Usage' OR line_item_line_item_type LIKE '%Discount')
AND cast(concat(billing_period, '-01') as timestamp) >= "date_trunc"('month', current_timestamp) - INTERVAL '2' MONTH
GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21
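The rewritten WHERE clause keeps the current and two prior billing months by casting `billing_period` ('YYYY-MM') to a timestamp and comparing it against `date_trunc('month', current_timestamp) - INTERVAL '2' MONTH`. An equivalent cutoff computation in Python, shown only to clarify the predicate:

```python
from datetime import date

def billing_period_cutoff(today: date, months_back: int = 2) -> str:
    """First day of the month `months_back` months before `today`, as 'YYYY-MM'.

    String comparison against billing_period values in this format is
    equivalent to the Athena predicate:
    cast(concat(billing_period, '-01') as timestamp)
        >= date_trunc('month', current_timestamp) - INTERVAL '2' MONTH
    """
    year, month = today.year, today.month - months_back
    while month <= 0:
        month += 12
        year -= 1
    return f'{year:04d}-{month:02d}'

print(billing_period_cutoff(date(2025, 1, 29)))  # → '2024-11'
```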


572 changes: 395 additions & 177 deletions dashboards/health-events/health-events.yaml


2 changes: 1 addition & 1 deletion requirements.txt
@@ -5,4 +5,4 @@ requests
six>=1.15
tqdm
tzlocal>=4.0
questionary>=1.10
InquirerPy
2 changes: 1 addition & 1 deletion setup.cfg
@@ -31,7 +31,7 @@ install_requires =
requests
tzlocal>=4.0
six>=1.15
questionary>=1.10
InquirerPy
tqdm

[options.entry_points]