
Getting Started

AWS IAM Policy - nOps Free Platform

IAM Policy for nOps Platform

nOps requires safe, secure, and AWS-approved cross-account access to your AWS accounts in order to provide the analysis, dashboards, and reports that you need. We see only what you choose to share with us in order to deliver our services, nothing more, and access is granted only after you give us permission.

For the AWS Payer/Management account, nOps uses the following policies:

  1. The AWS managed ReadOnlyAccess policy, which is fully managed by AWS and updated periodically as AWS adds new services.
  2. Because the AWS managed ReadOnlyAccess policy includes read access to some sensitive data, nOps applies an explicit deny list, which you can easily update to match your own security requirements. – Explicit Deny List
  3. Lastly, a few other policies that are necessary to create the Cost and Usage Report for cost visibility, perform the Well-Architected Review, and provide placeholders to support automating the setup for the nOps ShareSave program: CUR, S3, Well-Architected, EventBridge, and Organizations.

For AWS linked accounts, nOps uses the following policies:

  1. The AWS managed ReadOnlyAccess policy, which is fully managed by AWS and updated periodically as AWS adds new services.
  2. Because the AWS managed ReadOnlyAccess policy includes read access to some sensitive data, nOps applies an explicit deny list, which you can easily update to match your own security requirements. – Explicit Deny List
  3. Lastly, a few other policies that are necessary for the Well-Architected Review and placeholders for automating the setup for the nOps ShareSave program: Well-Architected and EventBridge.
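The allow-plus-explicit-deny layering described above can be sketched in a few lines of Python. This is only an illustrative model of IAM's evaluation order (an explicit deny always overrides an allow), not nOps code; the action names and patterns below are hypothetical examples, not the actual policy contents.

```python
from fnmatch import fnmatch

def is_allowed(action, allow_patterns, deny_patterns):
    """Simplified IAM evaluation: an explicit deny overrides any allow."""
    if any(fnmatch(action, p) for p in deny_patterns):
        return False  # explicit deny always wins
    return any(fnmatch(action, p) for p in allow_patterns)

# Hypothetical patterns standing in for ReadOnlyAccess plus a deny list:
allow = ["*:Describe*", "*:Get*", "*:List*"]
deny = ["secretsmanager:*", "dynamodb:GetItem", "kms:GetPublicKey"]

print(is_allowed("ec2:DescribeInstances", allow, deny))          # True
print(is_allowed("secretsmanager:GetSecretValue", allow, deny))  # False
```

In real IAM the deny list is a separate policy statement evaluated alongside ReadOnlyAccess, which is why you can tighten it without touching the AWS managed policy.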

Payer Account – IAM Policy JSON

Linked Account – IAM Policy JSON
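As an illustration of what one entry in such a policy looks like, here is a minimal explicit-deny statement in IAM policy JSON. The statement ID is hypothetical and the actions are a small example subset drawn from the Explicit Deny section; the linked policy JSONs above are the authoritative versions.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ExplicitDenyExample",
      "Effect": "Deny",
      "Action": [
        "secretsmanager:*",
        "dynamodb:GetItem",
        "s3-object-lambda:GetObject"
      ],
      "Resource": "*"
    }
  ]
}
```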

What? Why? and How Much?

The following tables describe each permission within the IAM policy:

  • First column: Permission name.
  • Second column: What the permission is.
  • Third column: Why the permission is important for nOps.
  • Fourth column: What kind of access the permission gives to nOps.
CUR (Full: Read, Limited: Write)
  • DescribeReportDefinitions – What: Lists the AWS Cost and Usage Reports available to this account. Why: Used for creating reports in the billing bucket setup. Access: Read, all resources.
  • PutReportDefinition – What: Creates a new report using the description that you provide. Why: Used for creating reports in the billing bucket setup. Access: Write, all resources.

EventBridge (Limited: Write)
  • CreateEventBus – What: Creates a new event bus within your account. Why: Allows nOps to create EventBridge integrations for automation; required for the ShareSave program. Access: Write, all resources.

Organizations (Limited: Write; Full: List, Read)
  • InviteAccountToOrganization – What: Sends an invitation to another AWS account, asking it to join your organization as a member account. Why: Required for onboarding child accounts via the CloudFormation stack during Automatic Setup and for the ShareSave program. Access: Write, all resources.

S3 (Limited: Read)
  • HeadBucket – What: Determines whether a bucket exists and whether you have permission to access it. Why: Lets nOps check whether the bucket for the CUR already exists or needs to be created. Access: Read.
  • HeadObject – What: Retrieves metadata from an object without returning the object itself. Why: Lets nOps see only the metadata of a bucket without seeing the bucket's contents. Access: Read.

Support (Limited: Read)
  • DescribeTrustedAdvisorCheckRefreshStatuses – What: Returns the refresh status of the AWS Trusted Advisor checks that have the specified check IDs. Why: No longer used. Access: Read, all resources.
  • DescribeTrustedAdvisorCheckResult – What: Returns the results of the AWS Trusted Advisor check that has the specified check ID. Why: No longer used. Access: Read, all resources.
  • DescribeTrustedAdvisorChecks – What: Returns information about all available AWS Trusted Advisor checks, including the name, ID, category, description, and metadata. Why: No longer used. Access: Read, all resources.

Well-Architected (Full access)
  • wellarchitected – What: Grants full access to Well-Architected. Why: nOps provides full functionality dedicated to Well-Architected compliance and requires full access to this service to manage cloud workloads. Access: Full.
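To make the CUR permissions above concrete, the sketch below shows the kind of request payload that cur:PutReportDefinition accepts. The report name, bucket, and prefix are placeholders, not the values nOps uses; with AWS credentials configured, the commented boto3 call would create the report.

```python
# Hypothetical cur:PutReportDefinition payload; all names are placeholders.
report_definition = {
    "ReportName": "example-cur-report",
    "TimeUnit": "HOURLY",                       # line-item granularity
    "Format": "textORcsv",
    "Compression": "GZIP",
    "AdditionalSchemaElements": ["RESOURCES"],  # include resource IDs
    "S3Bucket": "example-cur-bucket",
    "S3Prefix": "cur/",
    "S3Region": "us-east-1",                    # the CUR API lives in us-east-1
    "RefreshClosedReports": True,
    "ReportVersioning": "OVERWRITE_REPORT",
}

# With credentials in place (requires boto3), creating the report would be:
#   import boto3
#   boto3.client("cur", region_name="us-east-1").put_report_definition(
#       ReportDefinition=report_definition)
```

DescribeReportDefinitions is the read-side counterpart: it is what lets a tool decide whether a report like this one already exists before writing a new definition.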

Explicit Deny

The following is the list of services for which nOps explicitly denies permissions:

  • ACM (AWS Certificate Manager)
  • API Gateway
  • AppConfig
  • AppFlow
  • AppStream
  • AppSync
  • Athena
  • Backup
  • Cassandra
  • Chime
  • Cloud9
  • Cloud Directory
  • CloudFront
  • CloudWatch
  • CodeArtifact
  • CodeBuild
  • CodeCommit
  • CodeDeploy
  • CodeStar
  • Cognito
  • Comprehend
  • Config
  • Connect
  • Data Pipeline
  • DAX (DynamoDB Accelerator)
  • DeepComposer
  • Device Farm
  • Direct Connect
  • Discovery
  • DMS (Database Migration Service)
  • DS (Directory Service)
  • DynamoDB
  • EC2 (Elastic Compute Cloud)
  • ECR (Elastic Container Registry)
  • EKS (Elastic Kubernetes Service)
  • Elastic Beanstalk
  • ES (Elasticsearch)
  • FIS (Fault Injection Simulator)
  • FMS (Firewall Manager)
  • Fraud Detector
  • GameLift
  • GeoLocation
  • Glue
  • GuardDuty
  • Inspector 2
  • Image Builder
  • IoT RoboRunner
  • IoT SiteWise
  • IVS (Interactive Video Service)
  • Kafka
  • Kendra
  • Kinesis
  • KMS (Key Management Service)
  • Lex
  • Lambda
  • License Manager
  • Lightsail
  • Logs
  • ML (Machine Learning)
  • Macie2
  • Mobile Hub
  • Nimble
  • Polly
  • Proton
  • QLDB (Quantum Ledger Database)
  • RDS (Relational Database Service)
  • Rekognition
  • Resilience Hub
  • RoboMaker
  • S3 (Simple Storage Service)
  • SageMaker
  • Schemas
  • SDB (SimpleDB)
  • Secrets Manager
  • Security Hub
  • SES (Simple Email Service)
  • Signer
  • SMS (Server Migration Service)
  • Snowball
  • SQS (Simple Queue Service)
  • SSM (Systems Manager)
  • SSO (Single Sign-On)
  • Storage Gateway
  • Support
  • TimeStream
  • Transcribe
  • Transfer
  • WAF (Web Application Firewall)
  • WorkMail
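One way to read the list above is as the set of service prefixes that appear in the policy's deny statements. The sketch below checks whether a given API action targets one of those services; the prefix set shown is a hypothetical subset, and note that for most of these services the actual policy denies only specific actions (detailed in the tables that follow), not the whole service.

```python
# Hypothetical subset of service prefixes from the deny list above.
DENIED_SERVICE_PREFIXES = {
    "athena", "cloudfront", "dynamodb", "kms",
    "lambda", "rds", "sagemaker", "secretsmanager",
}

def touches_denied_service(action: str) -> bool:
    """Return True if an IAM action (e.g. 'kms:DescribeKey') targets
    a service that has entries on the explicit deny list."""
    service, _, _ = action.partition(":")
    return service.lower() in DENIED_SERVICE_PREFIXES

print(touches_denied_service("kms:DescribeKey"))              # True
print(touches_denied_service("wellarchitected:GetWorkload"))  # False
```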
ACM (AWS Certificate Manager)
  • acm-pca:Describe – Denies all Describe permissions in ACM-PCA.
  • acm-pca:Get – Denies all Get permissions in ACM-PCA.
  • acm-pca:List – Denies all List permissions in ACM-PCA.
  • acm:Describe – Denies all Describe permissions in ACM.
  • acm:Get – Denies all Get permissions in ACM.
  • acm:List – Denies all List permissions in ACM.

API Gateway
  • GET – Denies all GET permissions for API Gateway.

AppConfig
  • GetConfiguration – Denies the permission to view details about a configuration.

AppFlow
  • DescribeConnector – Denies the permission to describe a connector registered in Amazon AppFlow.
  • ListConnector – Denies the permission to list connectors supported in Amazon AppFlow.

AppStream
  • DescribeDirectoryConfigs – Denies the permission to retrieve a list that describes one or more specified Directory Config objects for AppStream 2.0.
  • DescribeUsers – Denies the permission to retrieve a list that describes one or more specified users in the user pool.
  • DescribeSessions – Denies the permission to retrieve a list that describes the streaming sessions for a specified stack and fleet.

AppSync
  • Get – Denies the permission to read resources in this service.
  • List – Denies the permission to list resources in this service.

Athena
  • Get – Denies the permission to read resources in this service.
  • List – Denies the permission to list resources in this service.

Backup
  • GetBackupVaultAccessPolicy – Denies the permission to get a backup vault access policy.

Cassandra (Keyspaces)
  • Select – Denies the permission to SELECT data from a table.

Chime
  • Describe – Denies the permission to read resources in this service.
  • Get – Denies the permission to read resources in this service.
  • List – Denies the permission to list resources in this service.

Cloud9
  • Describe – Denies the permission to read resources in this service.
  • Get – Denies the permission to read resources in this service.
  • List – Denies the permission to list resources in this service.

Cloud Directory
  • Get – Denies the permission to read resources in this service.
  • List – Denies the permission to list resources in this service.

CloudFront
  • GetCloudFrontOriginAccessIdentity – Denies the permission to get information about a CloudFront origin access identity.
  • GetFieldLevelEncryption – Denies the permission to get the field-level encryption configuration information.
  • GetKeyGroupConfig – Denies the permission to get a key group configuration.
CloudWatch
  • GetMetricData – Denies the permission to retrieve batch amounts of CloudWatch metric data and perform metric math on retrieved data.
  • GetMetricStream – Denies the permission to return the details of a CloudWatch metric stream.
  • ListMetricStreams – Denies the permission to return a list of all CloudWatch metric streams in your account.

CodeArtifact
  • GetAuthorizationToken – Denies the permission to generate a temporary authorization token for accessing repositories in a domain.
  • ReadFromRepository – Denies the permission to return package assets and metadata from a repository endpoint.

CodeBuild
  • BatchGet – Denies all BatchGet permissions.
  • ListSourceCredentials – Denies the permission to return a list of SourceCredentialsInfo objects.

CodeCommit
  • BatchGet – Denies all BatchGet permissions.
  • Get – Denies all Get permissions.
  • GitPull – Denies the permission to pull information from an AWS CodeCommit repository to a local repo.

CodeDeploy
  • BatchGet – Denies all BatchGet permissions.
  • Get – Denies all Get permissions.

CodeStar
  • DescribeUserProfile – Denies the permission to describe a user in AWS CodeStar and the user attributes across all projects.
  • ListUserProfiles – Denies the permission to list user profiles in AWS CodeStar.

Cognito
  • cognito-identity (Cognito Identity) – Denies the permission to access any resources in this service.
  • cognito-idp (Cognito User Pools) – Denies the permission to access any resources in this service.
  • cognito-sync (Cognito Sync) – Denies the permission to access any resources in this service.

Comprehend
  • Describe – Denies the permission to Describe resources.
  • List – Denies the permission to List resources.

Config
  • BatchGetAggregateResourceConfig – Denies the permission to return the current configuration items for resources that are present in your AWS Config aggregator.
  • BatchGetResourceConfig – Denies the permission to return the current configuration for one or more requested resources.
  • SelectAggregateResourceConfig – Denies the permission to run a structured query language (SQL) SELECT command against an aggregator to query the configuration state of AWS resources across multiple accounts and regions and return the matching resource configurations.
  • SelectResourceConfig – Denies the permission to run a structured query language (SQL) SELECT command, perform the corresponding search, and return the matching resource configurations.

Connect
  • Describe – Denies the permission to Describe resources.
  • Get – Denies the permission to Get resources.
  • List – Denies the permission to List resources.

Data Pipeline
  • DescribeObjects – Denies the permission to get the object definitions for a set of objects associated with the pipeline.
  • EvaluateExpression – Denies task runners the permission to call EvaluateExpression to evaluate a string in the context of an object.
  • QueryObjects – Denies the permission to query the specified pipeline for the names of objects that match a specified set of conditions.

DAX (DynamoDB Accelerator)
  • BatchGetItem – Denies the permission to return the attributes of one or more items from one or more tables.
  • GetItem – Denies the GetItem operation, which returns a set of attributes for the item with the given primary key.
  • Query – Denies the permission to use the primary key of a table or a secondary index to directly access items from that table or index.
DeepComposer
  • Get – Denies all Get permissions in DeepComposer.
  • List – Denies all List permissions in DeepComposer.

Device Farm
  • GetRemoteAccessSession – Denies the permission to retrieve the link to a currently running remote access session.
  • ListRemoteAccessSessions – Denies the permission to list information about currently running remote access sessions.

Direct Connect
  • Describe – Denies all Describe permissions in Direct Connect.
  • List – Denies all List permissions in Direct Connect.

Discovery
  • Describe – Denies all Describe permissions in Discovery.
  • Get – Denies all Get permissions in Discovery.
  • List – Denies all List permissions in Discovery.

DMS (Database Migration Service)
  • Describe – Denies all Describe permissions in DMS.
  • List – Denies the permission to list all tags for AWS DMS resources.

DS (Directory Service)
  • Get – Denies all Get permissions in Directory Service.

DynamoDB
  • GetItem – Denies the GetItem operation, which returns a set of attributes for the item with the given primary key.
  • BatchGetItem – Denies the permission to return the attributes of one or more items from one or more tables.
  • Query – Denies the permission to use the primary key of a table or a secondary index to directly access items from that table or index.
  • Scan – Denies the permission to return one or more items and item attributes by accessing every item in a table or a secondary index.

EC2 (Elastic Compute Cloud)
  • GetConsoleScreenshot – Denies the permission to retrieve a JPG-format screenshot of a running instance.

ECR (Elastic Container Registry)
  • ecr:BatchGetImage – Denies the permission to get detailed information for specified images within a specified repository.
  • ecr:GetAuthorizationToken – Denies the permission to retrieve a token that is valid for a specified registry for 12 hours.
  • ecr:GetDownloadUrlForLayer – Denies the permission to retrieve the download URL corresponding to an image layer.
  • ecr-public:GetAuthorizationToken – Denies the permission to retrieve a token that is valid for a specified registry for 12 hours.

EKS (Elastic Kubernetes Service)
  • DescribeIdentityProviderConfig – Denies the permission to retrieve descriptive information about an identity provider config associated with a cluster.

Elastic Beanstalk
  • DescribeConfigurationOptions – Denies the permission to retrieve descriptions of environment configuration options.
  • DescribeConfigurationSettings – Denies the permission to retrieve a description of the settings for a configuration set.
ES (OpenSearch Service)
  • ESHttpGet – Denies the permission to send HTTP GET requests to the OpenSearch APIs.

FIS (Fault Injection Simulator)
  • GetExperimentTemplate – Denies the permission to retrieve an AWS FIS experiment template.

FMS (Firewall Manager)
  • GetAdminAccount – Denies the permission to retrieve the AWS Organizations management account that is designated as the AWS Firewall Manager administrator.

Fraud Detector
  • BatchGetVariable – Denies the permission to get a batch of variables.
  • Get – Denies all Get permissions in Fraud Detector.

GameLift
  • GetGameSessionLogUrl – Denies the permission to retrieve the location of stored logs for a game session.
  • GetInstanceAccess – Denies the permission to request remote access to a specified fleet instance.

GeoLocation (Location)
  • ListDevicePositions – Denies the permission to retrieve a list of devices and their latest positions from the given tracker resource.

Glue
  • GetSecurityConfiguration – Denies the permission to retrieve a security configuration.
  • SearchTables – Denies the permission to retrieve the tables in the catalog.
  • GetTable – Denies all GetTable permissions in Glue.

GuardDuty
  • GetIPSet – Denies the permission to retrieve GuardDuty IPSets.
  • GetMasterAccount – Denies the permission to retrieve details of the GuardDuty administrator account associated with a member account.
  • GetMembers – Denies the permission to retrieve the member accounts associated with an administrator account.
  • ListMembers – Denies the permission to retrieve a list of GuardDuty member accounts associated with an administrator account.
  • ListOrganizationAdminAccounts – Denies the permission to list details about the organization delegated administrator for GuardDuty.

Inspector 2
  • GetConfiguration – Denies the permission to retrieve information about the Amazon Inspector configuration settings for an AWS account.

Image Builder
  • GetImage – Denies the permission to get an EC2 image.

IoT RoboRunner
  • Get – Denies all Get permissions in IoT RoboRunner.

IoT SiteWise
  • ListAccessPolicies – Denies the permission to list all access policies for an identity or a resource.

IVS (Interactive Video Service)
  • GetPlaybackKeyPair – Denies the permission to get the playback key pair information for a specified ARN.
  • GetStreamSession – Denies the permission to get information about the stream session on a specified channel.

Kafka (MSK)
  • GetBootstrapBrokers – Denies the permission to get connection details for the brokers in an MSK cluster.

Kendra
  • Query – Denies the permission to query documents and FAQs.

Kinesis
  • Get – Denies all Get permissions in Kinesis.

KMS (Key Management Service)
  • DescribeKey – Denies the permission to view detailed information about an AWS KMS key.
  • GetPublicKey – Denies the permission to download the public key of an asymmetric AWS KMS key.
Lex
  • Get – Denies all Get permissions in Lex.

Lambda
  • GetFunctionConfiguration – Denies the permission to view details about the version-specific settings of an AWS Lambda function or version.

License Manager
  • GetGrant – Denies the permission to get a grant.
  • GetLicense – Denies the permission to get a license.
  • ListTokens – Denies the permission to list tokens.

Lightsail
  • GetBucketAccessKeys – Denies the permission to get the existing access key IDs for the specified Amazon Lightsail bucket.
  • GetCertificates – Denies the permission to view information about one or more Amazon Lightsail SSL/TLS certificates.
  • GetContainerImages – Denies the permission to view the container images that are registered to your Amazon Lightsail container service.
  • GetKeyPair – Denies the permission to get information about a key pair.
  • GetRelationalDatabaseLogStreams – Denies the permission to get the log streams available for a relational database.

Logs
  • GetLogEvents – Denies the permission to list log events from the specified log stream.
  • StartQuery – Denies the permission to schedule a query of a log group using CloudWatch Logs Insights.

ML (Machine Learning)
  • GetMLModel – Denies the permission to return an MLModel that includes detailed metadata, data source information, and the current status of the MLModel.

Macie2
  • GetAdministratorAccount – Denies the permission to retrieve information about the Amazon Macie administrator account for an account.
  • GetMember – Denies the permission to retrieve information about an account that's associated with an Amazon Macie administrator account.
  • GetMacieSession – Denies the permission to retrieve information about the status and configuration settings for an Amazon Macie account.
  • SearchResources – Denies the permission to retrieve statistical data and other information about AWS resources that Amazon Macie monitors and analyzes.
  • GetSensitiveDataOccurrences – Denies the permission to retrieve occurrences of sensitive data reported by a finding.

Mobile Hub
  • ExportProject – Denies the permission to export the project configuration.

Nimble Studio
  • GetStreamingSession – Denies the permission to get a streaming session.

Polly
  • SynthesizeSpeech – Denies the permission to synthesize speech.

Proton
  • GetEnvironmentTemplate – Denies the permission to describe an environment template.
  • GetServiceTemplate – Denies the permission to describe a service template.
  • ListServiceTemplates – Denies the permission to list service templates.
  • ListEnvironmentTemplates – Denies the permission to list environment templates.

QLDB (Quantum Ledger Database)
  • GetBlock – Denies the permission to retrieve a block from a ledger for a given BlockAddress.
  • GetDigest – Denies the permission to retrieve a digest from a ledger for a given BlockAddress.
RDS (Relational Database Service)
  • Download – Denies all Download permissions for RDS.

Rekognition
  • CompareFaces – Denies the permission to compare faces in the source input image with each face detected in the target input image.
  • Detect – Denies all Detect permissions in Rekognition.
  • Search – Denies all Search permissions in Rekognition.

Resilience Hub
  • DescribeAppVersionTemplate – Denies the permission to describe the application version template.
  • ListRecommendationTemplates – Denies the permission to list recommendation templates.

RoboMaker
  • GetWorldTemplateBody – Denies the permission to get the body of a world template.

S3 (S3 Object Lambda)
  • s3-object-lambda:GetObject – Denies the permission to retrieve objects from Amazon S3.

SageMaker
  • Search – Denies the permission to search for SageMaker objects.

Schemas (EventBridge Schemas)
  • GetDiscoveredSchema – Denies the permission to retrieve a schema for the provided list of sample events.

SDB (SimpleDB)
  • Get – Denies all Get permissions for SDB.
  • Select – Denies all Select permissions for SDB.

Secrets Manager
  • * – Denies all permissions in Secrets Manager.

Security Hub
  • GetFindings – Denies the permission to retrieve a list of findings from Security Hub.
  • GetMembers – Denies the permission to retrieve the details of Security Hub member accounts.
  • ListMembers – Denies the permission to retrieve details about Security Hub member accounts associated with the administrator account.

SES (SES v1, SES v2)
  • GetTemplate – Denies the permission to return the template object, which includes the subject line, HTML part, and text part, for the template you specify.
  • GetEmailTemplate – Denies the permission to return the template object, which includes the subject line, HTML part, and text part, for the template you specify.
  • GetContact – Denies the permission to return a contact from a contact list.
  • GetContactList – Denies the permission to return contact list metadata.
  • ListTemplates – Denies the permission to list the email templates present in your account.
  • ListEmailTemplates – Denies the permission to list all of the email templates for your account.
  • ListVerifiedEmailAddresses – Denies the permission to list all of the email addresses that have been verified.

Signer
  • GetSigningProfile – Denies the permission to return information about a specific signing profile.
  • ListProfilePermissions – Denies the permission to list the cross-account permissions associated with a signing profile.
  • ListSigningProfiles – Denies the permission to list all signing profiles in your account.
SMS (Pinpoint SMS Voice V2)What
sms-voice:DescribeKeywordsDenies the permission to describe the keywords for a pool or origination phone number.
sms-voice:DescribeOptedOutNumbersDenies the permission to describe the destination phone numbers in an opt-out list.
sms-voice:DescribePhoneNumbersDenies the permission to describe the origination phone numbers in your account.
sms-voice:DescribePoolsDenies the permission to describe the pools in your account.
SnowballWhat
DescribeDenies all Describe permission for Snowball.
SQS (Simple Queue Service)What
ReceiveDenies all Receive permission in SQS.
S SM (Systems Manager)What
ssm-contacts:*
ssm:DescribeParametersDenies the permission to view details about a specified SSM parameter.
ssm:GetParameterDenies all GetParameter permission in Systems Manager.
SSO (Single Sign-On)What
DescribeDenies all Describe permissions in SSO.
GetDenies all Get permissions in SSO.
ListDenies all List permissions in SSO.
Storage GatewayWhat
DescribeChapCredentialsDenies the permission to get an array of Challenge-Handshake Authentication Protocol (CHAP) credentials information for a specified iSCSI target, one for each target-initiator pair.
SupportWhat
DescribeCommunicationsDenies the permission to return the communications and attachments for one or more AWS Support cases.
TimeStreamWhat
ListDatabasesDenies the permission to list databases in your account.
ListTablesDenies the permission to list tables in your account.
TranscribeWhat
GetDenies all Get permission in Transcribe.
ListDenies all List permission in Transcribe.
TransferWhat
DescribeDenies all Describe permission in Transfer.
ListDenies all List permission in Transfer.
WAF (WAF Regional)What
waf-regional:GetChangeTokenDenies the permission to retrieve a change token to use in create, update, and delete requests.
WorkMailWhat
DescribeUserDenies the permission to read details for a user.
GetMailUserDetailsDenies the permission to get the details of the user’s mailbox and account.
ListUsersDenies the permission to list the organization’s users.

IAM policy for nOps (Last Updated: 12/17/2022)

Payer Account – IAM Policy JSON

{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"cur:DescribeReportDefinitions",
"cur:DeleteReportDefinition",
"cur:PutReportDefinition",
"events:CreateEventBus",
"organizations:InviteAccountToOrganization",
"s3:HeadBucket",
"s3:HeadObject",
"support:DescribeTrustedAdvisorCheckRefreshStatuses",
"support:DescribeTrustedAdvisorCheckResult",
"support:DescribeTrustedAdvisorChecks",
"wellarchitected:*"
],
"Effect": "Allow",
"Resource": "*"
},
{
"Action": [
"acm-pca:Describe*",
"acm-pca:Get*",
"acm-pca:List*",
"acm:Describe*",
"acm:Get*",
"acm:List*",
"apigateway:GET",
"appconfig:GetConfiguration*",
"appflow:DescribeConnector*",
"appflow:ListConnector*",
"appstream:DescribeDirectoryConfigs",
"appstream:DescribeUsers",
"appstream:DescribeSessions",
"appsync:Get*",
"appsync:List*",
"athena:Get*",
"athena:List*",
"backup:GetBackupVaultAccessPolicy",
"cassandra:Select",
"chime:Describe*",
"chime:Get*",
"chime:List*",
"cloud9:Describe*",
"cloud9:Get*",
"cloud9:List*",
"clouddirectory:Get*",
"clouddirectory:List*",
"cloudfront:GetCloudFrontOriginAccessIdentity",
"cloudfront:GetFieldLevelEncryption*",
"cloudfront:GetKeyGroupConfig",
"cloudwatch:GetMetricData",
"cloudwatch:GetMetricStream",
"cloudwatch:ListMetricStreams",
"codeartifact:GetAuthorizationToken",
"codeartifact:ReadFromRepository",
"codebuild:BatchGet*",
"codebuild:ListSourceCredentials",
"codecommit:BatchGet*",
"codecommit:Get*",
"codecommit:GitPull",
"codedeploy:BatchGet*",
"codedeploy:Get*",
"codestar:DescribeUserProfile",
"codestar:ListUserProfiles",
"cognito-identity:*",
"cognito-idp:*",
"cognito-sync:*",
"comprehend:Describe*",
"comprehend:List*",
"config:BatchGetAggregateResourceConfig",
"config:BatchGetResourceConfig",
"config:SelectAggregateResourceConfig",
"config:SelectResourceConfig",
"connect:Describe*",
"connect:Get*",
"connect:List*",
"datapipeline:DescribeObjects",
"datapipeline:EvaluateExpression",
"datapipeline:QueryObjects",
"dax:BatchGetItem",
"dax:GetItem",
"dax:Query",
"deepcomposer:Get*",
"deepcomposer:List*",
"devicefarm:GetRemoteAccessSession",
"devicefarm:ListRemoteAccessSessions",
"directconnect:Describe*",
"directconnect:List*",
"discovery:Describe*",
"discovery:Get*",
"discovery:List*",
"dms:Describe*",
"dms:List*",
"ds:Get*",
"dynamodb:GetItem",
"dynamodb:BatchGetItem",
"dynamodb:Query",
"dynamodb:Scan",
"ec2:GetConsoleScreenshot",
"ecr:BatchGetImage",
"ecr:GetAuthorizationToken",
"ecr:GetDownloadUrlForLayer",
"ecr-public:GetAuthorizationToken",
"eks:DescribeIdentityProviderConfig",
"elasticbeanstalk:DescribeConfigurationOptions",
"elasticbeanstalk:DescribeConfigurationSettings",
"es:ESHttpGet*",
"fis:GetExperimentTemplate",
"fms:GetAdminAccount",
"frauddetector:BatchGetVariable",
"frauddetector:Get*",
"gamelift:GetGameSessionLogUrl",
"gamelift:GetInstanceAccess",
"geo:ListDevicePositions",
"glue:GetSecurityConfiguration*",
"glue:SearchTables",
"glue:GetTable*",
"guardduty:GetIPSet",
"guardduty:GetMasterAccount",
"guardduty:GetMembers",
"guardduty:ListMembers",
"guardduty:ListOrganizationAdminAccounts",
"inspector2:GetConfiguration",
"imagebuilder:GetImage",
"iotroborunner:Get*",
"iotsitewise:ListAccessPolicies",
"ivs:GetPlaybackKeyPair",
"ivs:GetStreamSession",
"kafka:GetBootstrapBrokers",
"kendra:Query*",
"kinesis:Get*",
"kms:DescribeKey",
"kms:GetPublicKey",
"lex:Get*",
"lambda:GetFunctionConfiguration",
"license-manager:GetGrant",
"license-manager:GetLicense",
"license-manager:ListTokens",
"lightsail:GetBucketAccessKeys",
"lightsail:GetCertificates",
"lightsail:GetContainerImages",
"lightsail:GetKeyPair",
"lightsail:GetRelationalDatabaseLogStreams",
"logs:GetLogEvents",
"logs:StartQuery",
"machinelearning:GetMLModel",
"macie2:GetAdministratorAccount",
"macie2:GetMember",
"macie2:GetMacieSession",
"macie2:SearchResources",
"macie2:GetSensitiveDataOccurrences",
"mobilehub:ExportProject",
"nimble:GetStreamingSession",
"polly:SynthesizeSpeech",
"proton:GetEnvironmentTemplate",
"proton:GetServiceTemplate",
"proton:ListServiceTemplates",
"proton:ListEnvironmentTemplates",
"qldb:GetBlock",
"qldb:GetDigest",
"rds:Download*",
"rekognition:CompareFaces",
"rekognition:Detect*",
"rekognition:Search*",
"resiliencehub:DescribeAppVersionTemplate",
"resiliencehub:ListRecommendationTemplates",
"robomaker:GetWorldTemplateBody",
"s3-object-lambda:GetObject",
"sagemaker:Search",
"schemas:GetDiscoveredSchema",
"sdb:Get*",
"sdb:Select*",
"secretsmanager:*",
"securityhub:GetFindings",
"securityhub:GetMembers",
"securityhub:ListMembers",
"ses:GetTemplate",
"ses:GetEmailTemplate",
"ses:GetContact",
"ses:GetContactList",
"ses:ListTemplates",
"ses:ListEmailTemplates",
"ses:ListVerifiedEmailAddresses",
"signer:GetSigningProfile",
"signer:ListProfilePermissions",
"signer:ListSigningProfiles",
"sms-voice:DescribeKeywords",
"sms-voice:DescribeOptedOutNumbers",
"sms-voice:DescribePhoneNumbers",
"sms-voice:DescribePools",
"snowball:Describe*",
"sqs:Receive*",
"ssm-contacts:*",
"ssm:DescribeParameters*",
"ssm:GetParameter*",
"sso:Describe*",
"sso:Get*",
"sso:List*",
"storagegateway:DescribeChapCredentials",
"support:DescribeCommunications",
"timestream:ListDatabases",
"timestream:ListTables",
"transcribe:Get*",
"transcribe:List*",
"transfer:Describe*",
"transfer:List*",
"waf-regional:GetChangeToken",
"workmail:DescribeUser",
"workmail:GetMailUserDetails",
"workmail:ListUsers"
],
"Effect": "Deny",
"Resource": "*"
}
]
}

{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::[INSERT CUR S3 BUCKET]",
"arn:aws:s3:::[INSERT CUR S3 BUCKET]/*"
],
"Effect": "Allow"
}
]
}
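
As a rough illustration of how the wildcard entries in the Deny statement above behave, the sketch below (not part of the official nOps tooling) matches an action name against a small excerpt of the Deny list. Note that real IAM action matching is case-insensitive, which this simplified check is not.

```python
import fnmatch

# A small excerpt of the Deny action list from the policy above.
DENY_PATTERNS = [
    "rds:Download*",
    "secretsmanager:*",
    "sqs:Receive*",
    "dynamodb:GetItem",
]

def is_denied(action: str) -> bool:
    """Return True if the action matches any Deny pattern (wildcard-expanded)."""
    return any(fnmatch.fnmatchcase(action, pattern) for pattern in DENY_PATTERNS)

print(is_denied("secretsmanager:GetSecretValue"))  # True: covered by secretsmanager:*
print(is_denied("dynamodb:DescribeTable"))         # False: only GetItem is listed
```

Because an explicit Deny always wins over an Allow in IAM, these entries block data-access calls even though broad read permissions are granted elsewhere in the policy.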

Linked Account – IAM Policy JSON

{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"support:DescribeTrustedAdvisorCheckRefreshStatuses",
"support:DescribeTrustedAdvisorCheckResult",
"support:DescribeTrustedAdvisorChecks",
"wellarchitected:*"
],
"Effect": "Allow",
"Resource": "*"
},
{
"Action": [
"acm-pca:Describe*",
"acm-pca:Get*",
"acm-pca:List*",
"acm:Describe*",
"acm:Get*",
"acm:List*",
"apigateway:GET",
"appconfig:GetConfiguration*",
"appflow:DescribeConnector*",
"appflow:ListConnector*",
"appstream:DescribeDirectoryConfigs",
"appstream:DescribeUsers",
"appstream:DescribeSessions",
"appsync:Get*",
"appsync:List*",
"athena:Get*",
"athena:List*",
"backup:GetBackupVaultAccessPolicy",
"cassandra:Select",
"chime:Describe*",
"chime:Get*",
"chime:List*",
"cloud9:Describe*",
"cloud9:Get*",
"cloud9:List*",
"clouddirectory:Get*",
"clouddirectory:List*",
"cloudfront:GetCloudFrontOriginAccessIdentity",
"cloudfront:GetFieldLevelEncryption*",
"cloudfront:GetKeyGroupConfig",
"cloudwatch:GetMetricData",
"cloudwatch:GetMetricStream",
"cloudwatch:ListMetricStreams",
"codeartifact:GetAuthorizationToken",
"codeartifact:ReadFromRepository",
"codebuild:BatchGet*",
"codebuild:ListSourceCredentials",
"codecommit:BatchGet*",
"codecommit:Get*",
"codecommit:GitPull",
"codedeploy:BatchGet*",
"codedeploy:Get*",
"codestar:DescribeUserProfile",
"codestar:ListUserProfiles",
"cognito-identity:*",
"cognito-idp:*",
"cognito-sync:*",
"comprehend:Describe*",
"comprehend:List*",
"config:BatchGetAggregateResourceConfig",
"config:BatchGetResourceConfig",
"config:SelectAggregateResourceConfig",
"config:SelectResourceConfig",
"connect:Describe*",
"connect:Get*",
"connect:List*",
"datapipeline:DescribeObjects",
"datapipeline:EvaluateExpression",
"datapipeline:QueryObjects",
"dax:BatchGetItem",
"dax:GetItem",
"dax:Query",
"deepcomposer:Get*",
"deepcomposer:List*",
"devicefarm:GetRemoteAccessSession",
"devicefarm:ListRemoteAccessSessions",
"directconnect:Describe*",
"directconnect:List*",
"discovery:Describe*",
"discovery:Get*",
"discovery:List*",
"dms:Describe*",
"dms:List*",
"ds:Get*",
"dynamodb:GetItem",
"dynamodb:BatchGetItem",
"dynamodb:Query",
"dynamodb:Scan",
"ec2:GetConsoleScreenshot",
"ecr:BatchGetImage",
"ecr:GetAuthorizationToken",
"ecr:GetDownloadUrlForLayer",
"ecr-public:GetAuthorizationToken",
"eks:DescribeIdentityProviderConfig",
"elasticbeanstalk:DescribeConfigurationOptions",
"elasticbeanstalk:DescribeConfigurationSettings",
"es:ESHttpGet*",
"fis:GetExperimentTemplate",
"fms:GetAdminAccount",
"frauddetector:BatchGetVariable",
"frauddetector:Get*",
"gamelift:GetGameSessionLogUrl",
"gamelift:GetInstanceAccess",
"geo:ListDevicePositions",
"glue:GetSecurityConfiguration*",
"glue:SearchTables",
"glue:GetTable*",
"guardduty:GetIPSet",
"guardduty:GetMasterAccount",
"guardduty:GetMembers",
"guardduty:ListMembers",
"guardduty:ListOrganizationAdminAccounts",
"inspector2:GetConfiguration",
"imagebuilder:GetImage",
"iotroborunner:Get*",
"iotsitewise:ListAccessPolicies",
"ivs:GetPlaybackKeyPair",
"ivs:GetStreamSession",
"kafka:GetBootstrapBrokers",
"kendra:Query*",
"kinesis:Get*",
"kms:DescribeKey",
"kms:GetPublicKey",
"lex:Get*",
"lambda:GetFunctionConfiguration",
"license-manager:GetGrant",
"license-manager:GetLicense",
"license-manager:ListTokens",
"lightsail:GetBucketAccessKeys",
"lightsail:GetCertificates",
"lightsail:GetContainerImages",
"lightsail:GetKeyPair",
"lightsail:GetRelationalDatabaseLogStreams",
"logs:GetLogEvents",
"logs:StartQuery",
"machinelearning:GetMLModel",
"macie2:GetAdministratorAccount",
"macie2:GetMember",
"macie2:GetMacieSession",
"macie2:SearchResources",
"macie2:GetSensitiveDataOccurrences",
"mobilehub:ExportProject",
"nimble:GetStreamingSession",
"polly:SynthesizeSpeech",
"proton:GetEnvironmentTemplate",
"proton:GetServiceTemplate",
"proton:ListServiceTemplates",
"proton:ListEnvironmentTemplates",
"qldb:GetBlock",
"qldb:GetDigest",
"rds:Download*",
"rekognition:CompareFaces",
"rekognition:Detect*",
"rekognition:Search*",
"resiliencehub:DescribeAppVersionTemplate",
"resiliencehub:ListRecommendationTemplates",
"robomaker:GetWorldTemplateBody",
"s3-object-lambda:GetObject",
"sagemaker:Search",
"schemas:GetDiscoveredSchema",
"sdb:Get*",
"sdb:Select*",
"secretsmanager:*",
"securityhub:GetFindings",
"securityhub:GetMembers",
"securityhub:ListMembers",
"ses:GetTemplate",
"ses:GetEmailTemplate",
"ses:GetContact",
"ses:GetContactList",
"ses:ListTemplates",
"ses:ListEmailTemplates",
"ses:ListVerifiedEmailAddresses",
"signer:GetSigningProfile",
"signer:ListProfilePermissions",
"signer:ListSigningProfiles",
"sms-voice:DescribeKeywords",
"sms-voice:DescribeOptedOutNumbers",
"sms-voice:DescribePhoneNumbers",
"sms-voice:DescribePools",
"snowball:Describe*",
"sqs:Receive*",
"ssm-contacts:*",
"ssm:DescribeParameters*",
"ssm:GetParameter*",
"sso:Describe*",
"sso:Get*",
"sso:List*",
"storagegateway:DescribeChapCredentials",
"support:DescribeCommunications",
"timestream:ListDatabases",
"timestream:ListTables",
"transcribe:Get*",
"transcribe:List*",
"transfer:Describe*",
"transfer:List*",
"waf-regional:GetChangeToken",
"workmail:DescribeUser",
"workmail:GetMailUserDetails",
"workmail:ListUsers"
],
"Effect": "Deny",
"Resource": "*"
}
]
}

Onboarding#

Adding an AWS account to nOps with Automatic Setup#

Setting up an AWS nOps account (Automatic Setup)

nOps requires safe, secure, and AWS-approved access to your AWS accounts in order to give you the analysis, dashboards, and reports that you need. We only see what we need, no more, and we need you to give us permission first.

In order to get started with nOps, the first step is to set up an AWS account for nOps via the Setup Wizard and subscribe to nOps on the AWS marketplace. We made the setup process as easy as possible for you while complying with AWS security best practices.

In Automatic Setup, nOps takes care of creating the IAM policy and the CloudFormation stack for the account.

Prerequisites

To successfully set up the AWS account(s), the AWS user must possess:

  • Access to the Payer account, if you are using AWS Organizations.
  • Permission to create and run an AWS CloudFormation stack.
  • Permission to create AWS Identity and Access Management (IAM) roles in your account.
  • The name of an Amazon S3 bucket where your AWS Cost and Usage Reports (CURs) will be written. (nOps will create a bucket with the provided name if one does not exist.)
  • CURs enabled in the account.
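
Because nOps will create the CUR bucket if one does not exist, the name you supply must satisfy S3's bucket-naming rules. A simplified pre-check (covering only the most common rules, not the full S3 specification) might look like:

```python
import re

def looks_like_valid_bucket_name(name: str) -> bool:
    """Simplified S3 bucket-name check: 3-63 characters; lowercase letters,
    digits, hyphens, and dots; must start and end with a letter or digit.
    (The full S3 rules add more restrictions, e.g. no IP-address-style names.)"""
    return bool(re.fullmatch(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", name))

print(looks_like_valid_bucket_name("nops-cur-bucket"))  # True
print(looks_like_valid_bucket_name("Nops_CUR"))         # False: uppercase/underscore
```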

Note: If you add an AWS child account instead of a Payer Account, nOps will only see the cost details of the specific child account instead of the cost details of the entire organization.


Adding AWS Account(s)

When you log in to your nOps account for the first time, a pop-up screen will appear. This pop-up screen will guide you on how you can add your AWS account(s) to nOps. The screen consists of four distinct sections:

  1. Select Cloud Type
  2. Getting Started
  3. Link Cloud Accounts
  4. Fetching

Note:

If you add only a single account during the automatic setup, you can add more accounts later. Once your account is onboarded and you have access to the nOps platform:

  1. On the top-right corner of your nOps account, click on your user avatar to open a drop-down list.
  2. In the dropdown list, click Organization Settings. This will take you to the Cloud Accounts page.
  3. On the Cloud Accounts page, click + Add New Account.

Select Cloud Type

On this page, select the type of cloud account that you want to onboard and click Next. In the scope of this document, we will only explore the AWS Account option.

If you want to explore nOps first before you onboard any accounts, click Let Me Explore App.

Getting Started

In this section, you need to select the account setup method. In the scope of this article, we will deal with the Automatic Setup. Select the nOps Wizard Setup and click Next.


To learn more about Manual Setup, see Manual Setup. To learn more about IaaC Setup, see IaaC Multiple Accounts Setup.


Link Cloud Accounts

On the first page of this section, you can either select an AWS Organization account or a Single Account.

In the case of an AWS Organization account:

  • Make sure that you are logged into your AWS Master Payer Account.
  • Select the AWS Organization option.
  • Fill out the AWS Master Payer Account Name and S3 Bucket Name fields.
  • Click Setup Account.

If you select AWS Organization account, in the next section Link Cloud Account, you will have the option to onboard the child accounts associated with your AWS Organization Account.

In the case of Single Account:

  • Make sure that you are logged into your AWS account.
  • Select the Single Account option.
  • Fill out the AWS Account Name and S3 Bucket Name fields.
  • Click Setup Account.

When you click Setup Account, you will be redirected to your AWS > Create Stack page. All the fields on this page will be pre-populated. Click on the checkbox for “I acknowledge that AWS CloudFormation might create IAM resources”. nOps needs this permission to automate the creation of the IAM role.

After you click the checkbox, click on the Create button to start the data ingestion.


Note:

The CloudFormation stack can run in any region you prefer. Once you launch it from nOps, you can change the stack's region from the CloudFormation screen after your setup process is complete.


Once the stack is created, come back to nOps. nOps will verify account connectivity with AWS, check the CloudFormation stack permissions, and start the ingestion:

When data ingestion starts, in AWS console CloudFormation > Stacks > Stack Detail:

  1. If you have all the required permissions listed in the prerequisites section, the setup will start creating the stack with the status “CREATE_IN_PROGRESS”. Once the stack is created, the status will change to “CREATE_COMPLETE”. You can click the browser refresh button to check progress. The process normally takes 1 to 2 minutes.
  2. If you don’t have the proper permissions, you will see errors as shown in the screenshot below, and the stack will not be created. You can assign the necessary permissions to the AWS user or ask a teammate to rerun the setup.
  3. Once the nOps integration (stack) creation process is complete, log in to the nOps Dashboard.

Fetching

Once your AWS accounts are linked successfully, you will see the following screen:

Once you log back into nOps after data ingestion is complete, in the case of an AWS Organization Account, you will see the Setup Child Account page. With the help of your CUR, the setup process will automatically pull in the child accounts associated with your organization account.

To onboard a child account, click the Automatic Setup button next to the child account that you want to add. If you don’t want to add a specific child account, click Skip Setup.


Note:

If you don’t have the required permissions to onboard a child account, click Invite team member to invite a member of your organization who has the required permissions.


If you click Automatic Setup, the setup process will show you a confirmation popup.

Before you click Proceed, make sure that you are logged in to the child account you are onboarding. When you click Proceed, you will be redirected to the AWS CloudFormation console with all the fields pre-filled:

Check the I acknowledge that AWS CloudFormation might create IAM resources checkbox, and click Create Stack.

To take a look at the nOps CloudFormation template, see CloudFormation YAML Template.

If nOps detects more than 10 child accounts, you will be prompted to use the nOps IaaC Setup. In this case, nOps recommends the IaaC Setup instead of the Automatic Setup. To learn more, see IaaC Multiple Account Setup.

Once all the Child Accounts are added or skipped, click Next.

The setup process is now complete.


Note:

It can take up to 24 hours before you start seeing the different nOps dashboards and compliance views populated with data from your workload.


If you have any questions, please contact us at help@nops.io, or by phone at +1 866-673-9330.

On initial ingestion, nOps will pull the data from AWS accounts based on the following durations:

  • Cost data: 6 months look back + current month.
  • Rules: Current date.
  • CloudTrail Events: 14 days look back.
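
To make these look-back windows concrete, the sketch below (an illustration, not nOps code) computes the earliest date each dataset would cover, assuming the cost window starts on the first day of the month six months back:

```python
from datetime import date, timedelta

def ingestion_windows(today: date) -> dict:
    """Approximate the initial-ingestion look-back start dates."""
    # Cost data: 6 months back (from the first of that month) + current month.
    year, month = today.year, today.month - 6
    if month <= 0:
        month += 12
        year -= 1
    cost_start = date(year, month, 1)
    # CloudTrail events: 14 days back. Rules: current date only.
    return {
        "cost": cost_start,
        "cloudtrail": today - timedelta(days=14),
        "rules": today,
    }

print(ingestion_windows(date(2022, 12, 17)))
```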

IAM and CloudFormation:

The IAM policy used by nOps is scoped to read and write permissions only.

A Lambda function automates the creation of the Role and the Bucket (if it is absent) for the nOps integration to work.

The code for the Lambda function is available for your review. Click the link to get the YAML file.

If you are not comfortable with using the automated setup, you can use manual steps for the setup.

Article: Adding Your AWS account with the Manual Setup

View the latest IAM Policy here

Troubleshooting Tips:

  • Do you have a pop-up blocker on your browser? A pop-up blocker on your browser will stop nOps from redirecting you to an AWS account to create a stack.
  • There may have been a disconnect when creating the stack, causing it to fail with a ROLLBACK_ERROR status. In this case, retry the automatic setup, then delete the failed stack.
  • Is it pulling in incorrect data? When you have access to multiple AWS accounts, nOps can import data from the wrong one. Ensure that you are logged in to the correct account before starting the integration process.
  • If you belong to an Organization (multiple accounts linked to a Master account), ensure that you are logged into the Master account before running the wizard (so the billing data is populated), or have the organizational billing data files exported to one of your buckets.

Related Articles:

How Child Accounts Work in nOps

Adding an AWS Account to nOps with Manual Setup#

Setting up an AWS nOps account (Manual Setup)

nOps requires safe, secure, and AWS-approved access to your AWS accounts in order to give you the analysis, dashboards, and reports that you need. We only see what you want us to see in order to provide our services, no more, and we need you to give us permission first.

In order to get started with nOps, the first step is to set up an AWS account for nOps via the Automatic Setup, Manual Setup, IaaC Multi Account Setup (CloudFormation), or IaaC Multi Account Setup (Terraform). We made the setup process as easy as possible for you while complying with AWS security best practices.

The Manual Setup is used in complex environments by experienced AWS administrators who need granular control and insight into the access that nOps requires.

The Manual Setup approach is also useful for administrators who want to embed nOps access into their automation.

Prerequisites

You must have Admin role permissions in AWS before you can set up an AWS nOps account with Manual Setup.

Pro Tip: The Manual Setup is used in complex environments by experienced AWS administrators. Most customers opt to use the Automatic Setup procedure.

Adding AWS account (Manual Setup)

In the scope of this article, we will look at the Manual Setup procedure.

To use the Manual Setup for complex environments, follow these steps, in this order:

  1. Get the auto-generated External ID from nOps
  2. Set up an S3 billing bucket for Cost & Usage Reports
  3. Give nOps Permission and Create an IAM Policy
  4. Create an IAM Role
  5. Return to nOps to complete Manual Setup

Note: If you need any help with this process don’t hesitate to contact help@nops.io

Important information to copy and save

During this process, you should copy and save some information as you will need to enter it later. This information will be used in AWS and in nOps in order to complete the process:

  • Copy the External ID auto-generated through nOps.
  • Copy the ARN of the IAM Role that you create in the Create an IAM Role step.
  • Copy Report name created for the Cost and Usage Report (CUR).
  • Copy Report path prefix from the S3 billing bucket creation.

Get the auto-generated External ID in nOps

When you log in to your nOps account for the first time, a pop-up screen will appear. This pop-up screen will guide you on how you can add your AWS account(s) to nOps. The screen consists of four distinct sections:

  1. Select Cloud Type
  2. Getting Started
  3. Link Cloud Accounts
  4. Fetching

If you add only a single account during the automatic setup, you can add more accounts later. Once your account is onboarded and you have access to the nOps platform:

  1. On the top-right corner of your nOps account, click on your user avatar to open a drop-down list.
  2. In the dropdown list, click Organization Settings. This will take you to the Cloud Accounts page.
  3. On the Cloud Accounts page, click + Add New Account.

Select Cloud Type

On this page, select the type of cloud account that you want to onboard and click Next.

In the scope of this article, we are only going to deal with the AWS Account setup.

Getting Started

In this section, you need to select the account setup method. In the scope of this article, we will deal with the Manual Setup. Select the Manual Setup and click Next.

When you click Next, you should see a screen similar to:

Copy the External ID and save it, you will need it later on.

If you want to add more accounts after the completion of your onboarding:

  1. Log into the nOps application.
  2. From your Profile name drop-down, in the top-right, click Organization Settings. If you are a Partner or Client Admin, select a client first, then click Organization Settings.
  3. In the Settings page, click + Add New Account, this will take you to the Cloud Account page.
  4. In the Cloud Account page, select AWS Account and click Next. This will take you to the Setup Method page.
  5. In the Setup Method page, select Manual Setup and click Next. This will take you to the Account Details (Manual Setup) page.
  6. In the Account Details (Manual Setup) page, an External ID is auto-generated for you and prefilled in the External ID field.

Important: Do not exit this page, you will return to this page later to complete the account setup.

Set up the S3 billing bucket for Cost & Usage Reports

This section is divided into two steps: first you will create the Cost & Usage Report, and then you will create or select an S3 bucket for it.

Note: Ensure that your AWS SCP configurations allow IAM administrators to make the changes.

Create the Cost & Usage Report

In this step you will create a Cost & Usage Report (also called Detailed Billing Reports or CUR) so that nOps can analyze your cost information:

  1. Login to your AWS Management Console account.
  2. Go to: Billing & Cost Management Dashboard
    On the left-hand side select Cost & Usage Report
    or, go to: https://console.aws.amazon.com/billing/home?#/reports
  3. Click on Create Report:
  4. Create a report name (such as nopsbilling-daily-gzip).
  5. In Additional report details, check the Include resource IDs checkbox (mandatory).
  6. In the Data refresh settings, check the Automatically refresh your Cost & Usage Report when charges are detected for previous months with closed bills checkbox.
  7. Click Next.

When you click Next, it will take you to the Delivery options page where you will create the S3 billing bucket.

Create/Select the S3 billing bucket

AWS needs a place to save your cost and usage/detailed billing files, a place that is safe for you. In this step, you will create an S3 bucket that secures your information:

  1. In the Delivery options (the page you reached at the end of the last section), click Configure. This will open the Configure S3 Bucket dialog box.
  2. In the dialog box, do one of the following:
    – Select an existing bucket: Use an existing bucket from your AWS Account.
    or
    – Create a new bucket: Create a new S3 bucket to be used specifically for nOps.
  3. Click Next to go to Verify Policy.
  4. Check the “I have confirmed that this policy is correct” checkbox.
  5. Click Save to save this policy. When the policy is saved, you will return to the Delivery options page.
  6. In the Delivery options page:
    • Click the Verify button to make sure the S3 bucket has an appropriate policy for the delivery report (step 3).
    • Enter the report path prefix (required) – Suggestion: nopsbilling
    • Choose Daily (mandatory) for Time granularity.
    • Select an option for Report versioning (optional) — Suggestion: Overwrite existing report.
    • Select GZIP as Compression type (mandatory).
      Important: You will need the Report Path Prefix name later when you are adding the AWS Account in nOps

  7. Click Next.

  8. Click Review and Complete.

Give nOps permission: Create the IAM policy

In this step, you’ll give nOps permission to read the Cost & Usage Report in the S3 bucket.

Note:

———-

AWS has a sophisticated security system for Identity and Access Management (IAM). There are no shortcuts for this. The nOps Wizard/Automatic Setup makes this easier with a CloudFormation Template, but the details provided in this article are for AWS practitioners who need more information for their own automation or auditing purposes.

———-

To manually create the IAM policy in order to allow nOps access:

  1. On the AWS Management Console, go to the Identity and Access Management screen.
  2. From the left navigation panel, click Policies.
  3. Click Create Policy.
  4. Switch to the JSON tab and replace the existing JSON script with the script provided in nOps IAM Policy (click this link to get the script).
  5. Click Next: Tags (optional).
  6. Click Next: Review.
  7. Click on ‘Review Policy’.
  8. Once the policy is created, copy and save its ARN. You will attach this policy to the IAM Role in a later step.
  9. Provide a name and description for the policy.
  10. Click on ‘Create Policy’.

Now, follow the same steps above to create another policy, this time for the S3 bucket that houses the Cost & Usage Report.

To create this policy, follow all the steps as is except for step 4. In step 4 use the following script:

{
"Version": "2012-10-17",
"Statement": [
{
"Action": "s3:*",
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::<paste-bucket-name-here>",
"arn:aws:s3:::<paste-bucket-name-here>/*"
]
}
]
}

Make sure you replace <paste-bucket-name-here> with the name of the S3 bucket that houses the Cost & Usage Report; otherwise the policy will not take effect.
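
If you script this substitution, it is worth validating the result before pasting it into the console. A minimal sketch (the bucket name shown is a made-up example):

```python
import json

# The bucket-policy template from this article, with its placeholder intact.
BUCKET_POLICY_TEMPLATE = """{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::<paste-bucket-name-here>",
        "arn:aws:s3:::<paste-bucket-name-here>/*"
      ]
    }
  ]
}"""

def render_bucket_policy(bucket_name: str) -> str:
    """Substitute the CUR bucket name and verify the result parses as JSON."""
    rendered = BUCKET_POLICY_TEMPLATE.replace("<paste-bucket-name-here>", bucket_name)
    json.loads(rendered)  # raises ValueError if the JSON is malformed
    return rendered

# "nopsbilling-bucket" is a hypothetical bucket name for illustration.
policy = render_bucket_policy("nopsbilling-bucket")
print("arn:aws:s3:::nopsbilling-bucket/*" in policy)  # True
```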

You will attach both above policies to the IAM Role that you will create for nOps in the next step.

Creating IAM roles

IMPORTANT: You will need the nOps auto-generated External ID to create the IAM Role.

In order to allow the nOps SaaS application to use the IAM policy you just created, you need to create an IAM role.

To create a new role:

  1. On the AWS Management Console, go to the Identity and Access Management screen.
  2. From the left navigation panel, click Roles.
  3. Click Create Role.
  4. On Select trusted entity page, select AWS account.
  5. Click Another AWS account.
  6. For Account ID enter the nOps account ID (202279780353).
  7. Click Require external ID.
  8. For External ID, enter the string that was auto-generated for you by nOps in the Get the auto-generated External ID in nOps section. The auto-generated External ID adds an extra level of security for you.
  9. Click Next.
  10. In Add permissions, select the two IAM policies you created in Give nOps permission: Create the IAM policy.
  11. Click Next.
  12. Enter a name and description for the role.
  13. Click Add tags to add optional tags to this role.
  14. Click Create Role.
  15. Open the role you just created and copy its ARN. You will enter this ARN in nOps when you add the account.
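For reference, when AWS finishes these steps the role's trust policy should be equivalent to the following (the account ID is the nOps ID from step 6; YOUR-EXTERNAL-ID stands in for the External ID you entered in step 8):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::202279780353:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "YOUR-EXTERNAL-ID" }
      }
    }
  ]
}
```

The ExternalId condition means the nOps account can only assume this role when it presents the External ID that was generated for you, which is what makes the ID an extra layer of security.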

You have now completed the first part of the Manual Setup related to the AWS console.

Continue the Manual Setup of AWS account in nOps

Now that you have manually configured an IAM Role in your AWS account for access to AWS resources, the last step is to add that account to nOps.

Since you have already generated an External ID for nOps in Create an auto-generated External ID from nOps, you must now add information about the AWS role you created for nOps to fetch CloudTrail. You also need to add the S3 bucket so that nOps can fetch the billing data including the Cost & Usage Report.

Note: If you do not add an S3 bucket, your billing stats pages in nOps will not display any data.

  1. Start from where you left off in the Create an auto-generated External ID from nOps section.
  2. In the Account Details (Manual Setup) page, enter a name for the AWS account you are adding to nOps.
  3. The External ID is auto-generated.
  4. Enter the ARN of the IAM role that you created in the Creating IAM roles section.
  5. Add the S3 bucket name. Make sure it is the same as the S3 bucket you created for the Cost & Usage Report in the AWS console.
  6. Enter the name of the Cost & Usage Report you created in step 4 of the Create the Cost & Usage Report section.
  7. Enter the report prefix path that you created in step 6 of Create/Select the S3 billing bucket.
  8. Click Setup Account.
  8. Click Setup Account.

When adding the AWS account to nOps, make sure you save the settings after filling in all the fields.

Link Cloud Accounts

nOps will check the account connectivity with AWS, and start the ingestion:

Fetching

Once your AWS accounts are linked successfully, you will see the following screen:

Once you log back into nOps after data ingestion is complete, in the case of an AWS Organization Account, you will see the Setup Child Account page. With the help of your CUR, the setup process will automatically pull in the child accounts associated with your Organization account:

To onboard a child account, click Automatic Setup. If you don’t want to add a specific child account, click Skip Setup.

If you don’t have the required permissions to onboard a child account, click Invite team member to invite a member of your organization who has the required permissions.

If you click Automatic Setup, the setup process will show you a confirmation popup:

Before you click Proceed, make sure that you are logged in to the child account you are onboarding. When you click Proceed, you will be redirected to the AWS CloudFormation console with all the fields pre-filled:

Check the I acknowledge that AWS CloudFormation might create IAM resources checkbox, and click Create Stack.

To take a look at the nOps CloudFormation template, see CloudFormation YAML Template.

If you decide not to give nOps the required access, you may see the following warning:

You can click Proceed and Setup later, but in this case you will not be able to access the features that depend on the required services.

If nOps detects more than 10 child accounts, you will see the following prompt:

In this case, nOps recommends that you use the IaaC Setup instead of the Automatic Setup. To learn more about the IaaC setup, see IaaC Multiple Account Setup.

Once all the Child Accounts are added or skipped, click Next.

The setup process is now complete:

Note: It can take up to 24 hours before you start seeing the different nOps dashboards and compliance views populated with data from your workload.

If you have any questions, please contact us at help@nops.io, or by phone at +1 866-673-9330.

On initial ingestion, nOps will pull the data from AWS accounts based on the following durations:

  • Cost data: 6 months look back + current month.
  • Rules: Current date.
  • CloudTrail Events: 14 days look back.

The Manual Setup is now complete.

Viewing Added AWS Accounts

In nOps, you can view the list of all cloud accounts that you add to nOps.

To view the cloud accounts, go to UserName Dropdown (Top right) > Organization Settings > Cloud Accounts where:

  • For AWS accounts, the name of the S3 bucket (if added) is displayed, along with the “Last fetch” time of the S3 bucket.
  • For Azure accounts, the name of the account is displayed.

To edit an existing cloud account:

  • Go to UserName Dropdown (Top right) > Organization Settings > Cloud Accounts
  • Click the Edit button.

You can make any changes you need. When you are done, click the Update Account button to save the changes.

Note: Editing the S3 bucket of an existing AWS account can cause changes in cost data or other undesired results.

Related Articles:

How Child Accounts Work in nOps

How to Add a Read Only IAM Policy

Adding AWS Account(s) to nOps with Terraform

Onboarding via IaaC (Terraform)

nOps requires safe, secure, and AWS-approved access to your AWS accounts in order to give you the analysis, dashboards, and reports that you need. We only see what you want us to see in order to provide our services, no more, and we need you to give us permission first.

In order to credential and register multiple accounts, we leverage AWS Organizations, CloudFormation, StackSets, and Lambda.

Prerequisites

  • Admin role permissions in AWS in order to add AWS Payer and/or Child accounts to nOps using Terraform.
  • Access to the nOps public GitHub repository nOps Cloud Account Registration.

nOps Onboarding – Terraform Setup

When you log in to your nOps account for the first time, a pop-up screen will appear. This pop-up screen will guide you on how you can add your AWS account(s) to nOps. The screen consists of four distinct sections:

  1. Select Cloud Type
  2. Getting Started
  3. Link Cloud Accounts
  4. Fetching

1 – Select Cloud Type

On this page, select the type of cloud account that you want to onboard (AWS or Azure) and click Next.

In the scope of this article, we are going to deal with the AWS Account setup process.

2 – Getting Started

In this section, you need to select the account setup method. In the scope of this article, we will deal with the IaaC Multiple Accounts Setup. Select the IaaC Multiple Accounts Setup option and click Next.

3 – Link Cloud Accounts

The first page in the Link Cloud Accounts section informs you of the prerequisites. If this is your first time onboarding accounts in nOps, click Proceed to Create API Key.


Note:

If you are adding multiple accounts after you’ve already been onboarded, go to Create an API key to learn how you can get the API key.


On the second page in the Link Cloud Accounts section, enter:

  • An API key name
  • An API key description
  • Signature verification (optional)

After you add all the information, click Create API Key.

Once you click Create API Key, nOps will generate an API key for you. Copy and save the API key for future use, and click Next.

When you click Next, nOps will start checking for its connectivity with your AWS accounts. In order for nOps to establish a connection with your accounts and start fetching data, you need to:

  1. Go to your AWS console to enable CloudFormation StackSets in AWS Organizations, and enable StackSets in AWS CloudFormation.
  2. Go to the nOps Cloud Account Registration GitHub repo and follow the instructions in the Terraform Multiple Child Account Registration via Stackset section.

To enable CloudFormation StackSets in AWS Organizations, go to AWS Organizations > Services. If you see Access disabled next to the CloudFormation StackSets option, enable it.

To enable StackSets in AWS CloudFormation, go to CloudFormation > StackSets. If there is no prompt to enable StackSets, then skip this step. If you see an option to enable the StackSets, then enable it.

Before you continue with the onboarding process any further, make sure that you have:

  • nOps API Key.
  • IDs of the Organization Units you want to onboard.
  • Organization Root ID.
  • Payer Account ID.

Note:

To view the details about your organization including Organization Unit IDs, Root ID, and Master Payer Account ID, see Details About Your Organization.


nOps Terraform code is available in a public GitHub repository nOps Cloud Account Registration. You need to:

  1. Clone the repository.
  2. Navigate to `nops-cloud-account-registration/nops-aws-account-register/`.
  3. Follow the instructions in the terraform-master-payer-regiaster/ folder to add a Payer account. Follow the instructions in the terraform-multiple-child-accs-register-via-stacksets/ folder to add child account(s).
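The values collected above are typically passed to Terraform as input variables. The fragment below is illustrative only — the variable names are hypothetical, so match them to the names actually defined in each folder's variables.tf:

```hcl
# terraform.tfvars — illustrative variable names; check the repo's variables.tf
nops_api_key     = "<your-nOps-API-key>"
root_ou_id       = "<your-Organization-Root-ID>"   # e.g. the r-xxxx root ID
child_ou_ids     = ["<ou-id-1>", "<ou-id-2>"]      # Organization Units to onboard
payer_account_id = "<your-Payer-Account-ID>"
```

Keeping these values in a tfvars file (rather than typing them at each prompt) makes the onboarding repeatable if you need to re-run the registration later.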

After you’ve gone through the installation steps in the public GitHub repository (nOps Cloud Account Registration), the data ingestion process will start.

You can monitor the progress from the terminal where you ran the Terraform commands or you can also monitor the progress from the AWS CloudFormation console. In your AWS CloudFormation console, find the stack with the name member-consolidated-nops-account-register, open it, and go to the Stack Instances tab to see the progress.

After a few minutes (depending on the number of accounts) all stacks should be in the state CURRENT.

4 – Fetching

Once your AWS accounts are linked successfully, the data-fetching process will start. It might take several hours for nOps to fetch the data from your AWS account. In the meantime, you can click Let Me Explore to enter the nOps web application and see all the savings that nOps has to offer.

When the data fetching process is complete, you will see the message Your AWS accounts linked successfully on the setup screen.

The setup process is now complete and you will see the following screen.

If you have any questions, please contact us at help@nops.io, or by phone at +1 866-673-9330.

On initial ingestion, nOps will pull the data from AWS accounts based on the following durations:

  • Cost data: 6 months look back + current month.
  • Rules: Current date.
  • CloudTrail Events: 14 days look back.

Adding Multiple AWS Accounts to nOps with CloudFormation

nOps requires safe, secure, and AWS-approved access to your AWS accounts in order to give you the analysis, dashboards, and reports that you need. We only see what you want us to see in order to provide our services, no more, and we need you to give us permission first.

In order to credential and register multiple accounts, we leverage AWS Organizations, CloudFormation Stacks, StackSets, and Lambda.

For multi-account setup, nOps recommends that you use CloudFormation (this setup) instead of Terraform (intended for advanced users with specific requirements).

In this CloudFormation setup, the S3 bucket with the CUR is only required for the Master Payer account.

During this setup, you will use the same CloudFormation YAML template for a Master Payer account and for a single account. To add Child accounts, you will use a different CloudFormation template. Thus, you will create two stacks: one for the Master Payer account, and one for the Child accounts.

Prerequisites

  • You must have Admin role permissions in AWS before you can add multiple AWS accounts to nOps using CloudFormation.
  • Access to the nOps public Github repository nOps Cloud Account Registration.

Once you’ve taken care of the prerequisites, the next steps are simple and straightforward.

Adding Multiple AWS Accounts (CloudFormation)

When you log in to your nOps account for the first time, a pop-up screen will appear. This pop-up screen will guide you on how you can add your AWS account(s) to nOps. The screen consists of four distinct sections:

  1. Select Cloud Type
  2. Getting Started
  3. Link Cloud Accounts
  4. Fetching

After the Link Cloud Accounts section, in the case of Adding Multiple AWS Accounts (CloudFormation), you need to perform these extra steps on the AWS console:

  1. Enable StackSets in AWS Organizations and AWS CloudFormation.
  2. Go to AWS CloudFormation and create a Stack for the Master Payer account.
  3. Log in to the Master Payer account and create a StackSet for the child/member accounts.

To create the Master Payer account Stack and the child/member account stacksets, use the CloudFormation YAML templates from nOps Cloud Account Registration public GitHub repository.

Pull the nOps Cloud Account Registration public repository to your local machine before you continue with the setup. You will need the CloudFormation YAML templates in the repository while creating the stacksets. You will also need the nOps API key.

Select Cloud Type

On this page, select the type of cloud account that you want to onboard and click Next.

In the scope of this article, we are going to deal with the AWS Account setup process.

Getting Started

In this section, you need to select the account setup method. In the scope of this article, we will deal with the IaaC Multiple Accounts Setup. Select the IaaC Multiple Accounts Setup option and click Next.

Link Cloud Accounts

The first page in the Link Cloud Accounts section informs you of the prerequisites. If you are adding multiple accounts after you’ve already been onboarded into nOps, go to Create an API key to learn how you can get the API key.

If this is your first time onboarding accounts in nOps, click Proceed to Create API Key:

On the second page in the Link Cloud Accounts section, enter:

  • An API key name
  • An API key description
  • Signature verification (optional)

After you add all the information, click Create API Key:

Once you click Create API Key, nOps will generate an API key for you. Copy and save the API key for future use, and click Next:

When you click Next, nOps will start checking for its connectivity with your AWS accounts. In order for nOps to establish a connection with your accounts and start the data ingestion, you need to:

  1. Enable StackSets in AWS Organizations and AWS CloudFormation.
  2. Go to AWS CloudFormation and create a Stack for the Master Payer account.
  3. Log in to the Master Payer account and create a StackSet for the child/member accounts.

Once you complete these steps, come back to the nOps setup, and you will see the following screen. Click Refresh.

Enable Stacksets

To enable CloudFormation StackSets in AWS Organizations, go to AWS Organizations > Services. If you see Access disabled next to CloudFormation StackSets, enable it.

Once enabled, you should see Access enabled next to CloudFormation StackSets:

To enable StackSets in AWS CloudFormation, go to CloudFormation > StackSets. If there is no prompt to enable StackSets, then skip this step.

If you see an option to enable the StackSets, then enable it:

Create a Stack for the Master Payer Account

A Stack is a regional service for single-account deployment which, in this case, is the Master Payer account. First, we will deploy a CloudFormation Stack in the Master Payer account. Then we will log in to the Organization Master Account to create a StackSet for the Child Accounts (OUs).


Note: It is important to note that an Organization Master Account is not the same as a Master Payer account. A child account can also be a Master Payer account, but a child account can never be an Organization Master Account.


To create a stack for the Master Payer account, go to the AWS Console > CloudFormation > Stacks page and click Create stack > With new resources (standard).

The creation of a stack is divided into 4 steps:

In Step 1 (Specify template) —

  1. In the Specify template section, choose the Upload a template file option.
  2. Click Choose file:
  3. When you click Choose file, AWS will open a navigation window for you to navigate to and select the YAML template on your local machine. In your local copy of the repository, navigate to nops-cloud-account-registration/nops-aws-account-register/cloudformation-single-acc-register/ and select the nops_register_aws_acc.yaml file.
  4. Click Next.

In Step 2 (Specify stack details) —

  1. Provide a Stack name.
  2. Enter the account name to register in nOps.
  3. Provide the nOpsAPIKey.
  4. Enter the nOpsPrivateKey with a single slash instead of a double slash, since we are using CloudFormation directly.
  5. Click Next.

In Step 3 (Configure stack options) — leave every field to its default and click Next.

In Step 4 (Review) —

  1. Review the stack details.
  2. Check the “I acknowledge that AWS CloudFormation might create IAM resources” checkbox (important).
  3. Click Create stack.
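The console walkthrough above corresponds to a single CloudFormation CreateStack call. The sketch below assembles the arguments for that call; the parameter keys are illustrative assumptions and must match the Parameters section of nops_register_aws_acc.yaml:

```python
def build_create_stack_kwargs(stack_name, template_body, api_key, account_name):
    """Assemble arguments for CloudFormation CreateStack (Steps 1-4 above).

    Parameter keys below are illustrative; use the keys declared in the
    template's Parameters section.
    """
    return {
        "StackName": stack_name,
        "TemplateBody": template_body,
        "Parameters": [
            {"ParameterKey": "nOpsAPIKey", "ParameterValue": api_key},
            {"ParameterKey": "AccountName", "ParameterValue": account_name},
        ],
        # Equivalent of checking "I acknowledge that AWS CloudFormation
        # might create IAM resources" in Step 4.
        "Capabilities": ["CAPABILITY_IAM"],
    }

# With AWS credentials configured, you would then run something like:
#   import boto3
#   cfn = boto3.client("cloudformation")
#   cfn.create_stack(**build_create_stack_kwargs(...))
```

The Capabilities entry is why the console forces the IAM acknowledgement checkbox: the template creates IAM resources, and CloudFormation refuses to create the stack without that explicit acknowledgement.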

Create a Stackset for the Child/Member Accounts

A StackSet is multi-account and multi-region. To create and deploy a StackSet for the Child accounts, make sure that you are logged in to your Master Account.

To create a StackSet for the Child/Member accounts, log in to AWS with your Master Payer Account, go to the AWS Console > CloudFormation > StackSets page, and click Create StackSet. The creation of a StackSet is divided into 5 steps:

In Step 1 (Choose a template) —

  1. In the Specify template section, choose the Upload a template file option.
  2. Click Choose file:
  3. When you click Choose file, AWS will open a navigation window for you to navigate to and select the YAML template on your local machine. In your local copy of the repository, navigate to nops-cloud-account-registration/nops-aws-account-register/cloudformation-org-member-accounts-register/ and select the member_consolidated_aws_acc_nops_register.yaml file.
  4. Click Next.

In Step 2 (Specify StackSet details) —

  1. Provide a StackSet name.
  2. Enter the account name to register in nOps.
  3. Provide the nOpsAPIKey.
  4. Enter the nOpsPrivateKey with a single slash instead of a double slash, since we are using CloudFormation directly.
  5. Click Next.

In Step 3 (Configure StackSet options) —

  1. In the Execution configuration section, select the Inactive option.
  2. Click Next.

In Step 4 (Set deployment options) —

  1. In the Add stacks to stack set section, select the Deploy new stacks option.
  2. In the Deploy targets section, select the Deploy stacks in organizational units option.
  3. Provide the organizational unit ID.
  4. In the Specify regions section, select your desired region.
  5. In the Deployment options section, select the Parallel option (optional).
  6. Click Next.

In Step 5 (Review), review and create the StackSet.

Fetching

Once the stacksets are created and your AWS accounts are linked successfully, you will see the following screen:

It might take several hours for nOps to fetch the data from your AWS account.

After the data is fetched, the setup process is complete.

Note: It can take up to 24 hours before you start seeing the different nOps dashboards and compliance views populated with data from your workload.

If you have any questions, please contact us at help@nops.io, or by phone at +1 866-673-9330.

On initial ingestion, nOps will pull the data from AWS accounts based on the following durations:

  • Cost data: 6 months look back + current month.
  • Rules: Current date.
  • CloudTrail Events: 14 days look back.

Adding AWS Child Accounts in nOps

There are three different methods of onboarding a child account:

  1. During Automatic Setup
  2. Via your nOps Organization Account
  3. With the help of Terraform Multi Account Registration (IaaC)

All of these onboarding methods give the child accounts IAM permissions that allow nOps to read metadata, CloudTrail events, and other resource information about the child accounts. This allows nOps to offer its monitoring and recommendations features for security, operations, reliability, and performance.

During Automatic Setup

You can add child accounts to nOps during the automatic setup process. When you add an AWS Organization Master Payer Account during the automatic process, nOps will automatically pull in child accounts associated with the Parent account. nOps learns about these accounts with the help of your Cost and Usage Report (CUR).

During the Automatic Setup process, you need to set up an AWS Organization account and add an AWS Master Payer Account:

After your Master Payer Account is linked successfully, the automatic setup will ask you if you want to onboard the child account(s) right away. You can click Automatic Setup next to each child account to start the child account onboarding process:

If you click on Automatic Setup, it will redirect you to the respective AWS account for you to create a stack that nOps will use to access the child account. Please ensure that you are logged into the respective child AWS account when you click Proceed:

When you click on Proceed, you will be redirected to AWS > CloudFormation > Stacks > Create stack > Quick create stack page, with most of the information pre-filled. Click on Create Stack to start the onboarding process.

During the onboarding process of child accounts, nOps will not ask for the CUR since it has already been added with the AWS Organization Master Payer Account.

The setup process can take 1-2 hours to pull in data from AWS.

You can skip the onboarding of child accounts during this setup and add the accounts later.

nOps Organization Account

If you decide to skip onboarding the child accounts during the Automatic Setup, you can still onboard them via your nOps Organization Account.

Click on your account at the top right corner of the page and go to Organization Settings > Cloud Accounts. There you will see a list of child accounts that nOps detected with the help of your Cost and Usage Report (CUR).

You can onboard each child account with Manual Setup or Automatic Setup:

Click Automatic Setup or Manual Setup to start the onboarding process.

If you click Automatic Setup, it will redirect you to the respective AWS account for you to create a stack that nOps will use to access the child account. Please ensure that you are logged into the respective child AWS account when you click Proceed:

When you click on Proceed, you will be redirected to AWS > CloudFormation > Stacks > Create stack > Quick create stack page, with most of the information pre-filled. Click on Create Stack to start the onboarding process.

If you click Manual Setup, you will be redirected to the Account Details (Manual Setup) page. Since nOps already has the information for the S3 bucket that houses the CUR, the field for the S3 bucket will be locked. Click Update Account to start the onboarding process:

During the onboarding process of child accounts, nOps will not ask for the CUR since it has already been added with the AWS Organization Master Payer Account.

The setup process can take 1-2 hours to pull in data from AWS.

Terraform Multi Account Registration (IaaC)

Use the Terraform Multi Account Registration process when, along with your AWS Organization Master Payer Account, you have numerous child accounts that you want to onboard in nOps. This process makes it easier for you to onboard your child accounts with minimal effort.

You can simply provide the Organizational Unit IDs (OUs) of your child accounts during this setup and nOps will take care of the rest.

To learn about this onboarding process, see Adding AWS Account(s) to nOps with Terraform.

Adding an Azure Account to nOps

Use the following steps to set up Azure for nOps.

Setup Azure for nOps

Content

  1. Prerequisites
  2. Ensure you have the required permissions in Azure Active Directory (AAD)
  3. Creating an Azure AD application
  4. Obtain the Tenant ID
  5. Create the application and obtain its ID
  6. Create and obtain the application secret
  7. Grant API access to your application
  8. Grant the Reader role to the application

Prerequisites

You must have access to register an Azure AD application in order to continue. To ensure that you have proper access to be able to complete the steps, your account will need specific permissions. There are two main paths to follow:

  • For Admin roles, you should already have access to register applications
  • For User roles, ensure that you can register applications, or you have been assigned the Application administrator or Application developer role for Azure AD

Use the following process to check which roles and permissions have been assigned to you.

Ensure you have the required permissions in Azure Active Directory (AAD)

  1. Log in to the Azure portal
  2. Select Azure Active Directory in the left pane. (Or use the search bar at the top of the page.)
  3. Select the Overview pane to see your login information and your role.
    If you are an Administrator or Global Admin, you are all set – skip to Creating an Azure AD application section.
  4. If you are a User, click on your email address to open your user page. Click Assigned roles in the left pane to check whether you have either the Application administrator or Application developer role. If either one is listed, you will be able to finish this process.
  5. If no additional roles are assigned, you will only be able to continue if non-admin users have the option of creating apps. To check that, use the following steps:
    1. Log in to the Azure portal if you are not already logged in.
    2. In the left pane, select Azure Active Directory. (Or use the search bar at the top of the page.)
    3. Click User Settings in the left pane.
    4. In the right pane, review the App registrations setting. Yes – any user in the Azure AD tenant can register AD apps. No – only admin users can register AD apps.
    5. If the setting is Yes, continue to the Creating an Azure AD application section.
    6. If App registrations for your account is set to No, you do not have the correct permissions to continue. In this case, please contact your administrator to allow access using one of the options listed below. Once the permissions are set, you can proceed to the next section to create the application.
    • Assign you the Application administrator or Application developer role
    • Assign you an administrator role for the entire tenant
    • Change the App registrations setting to Yes (simplest option)
    NOTE: Options are listed following the principle of least privilege; i.e., assigning Application developer is safer than allowing all users to create apps on the tenant.

For more information about checking the Azure Active Directory permissions, see Check Azure Active Directory permissions.

Creating an Azure AD application

Successfully linking your account with nOps is a manual process of creating a custom application on your Azure tenant. The application you create, and the information you provide during the nOps onboarding process, give nOps access to your Azure account through REST APIs so that it can gather the necessary information about your resources.

Using these steps, you will:

  • Create and correctly configure the Azure AD application
  • Create an application secret
  • Assign correct permissions to the application
  • Link the application with the appropriate subscriptions within your tenant

Obtain the Tenant ID

  1. Log in to the Azure portal
  2. In the left pane, select Azure Active Directory. (Or use search bar at the top of the page)
  3. Select the Overview page from the left pane.
  4. Under Tenant information, find Tenant ID and note it. You will need this ID later.

Ensure that you have the required permissions in Azure Active Directory (AAD)

  1. Using the steps documented above in Ensure you have the required permissions in Azure Active Directory (AAD), check that your permissions are correctly set in order to continue.

For more information about checking the Azure Active Directory permissions, see Check Azure Active Directory permissions.

Create an AAD application to access the Azure resources

  1. Log in to the Azure portal
  2. In the left pane, select Azure Active Directory. (Or use search bar at the top of the page)
  3. In the left pane of Azure Active Directory, click App Registrations and click New registration.
  4. Specify the following details and click Register.
    • Enter a Name for your application.
    • Under Supported account types, leave the default setting: Accounts in this organizational directory only.
    • Under Redirect URI (optional), leave the default drop-down, Web, and in the blank text field, type https://localhost.

Your AAD application is now created and added to Azure Active Directory.

For more information about creating the Azure Active Directory application, see Create an Azure Active Directory application.

Obtain the application ID and create the application secret

  1. Log in to the Azure portal
  2. In the left pane, select Azure Active Directory. (Or use search bar at the top of the page)
  3. In the left pane of Azure Active Directory, click App Registrations, and in the right pane, select the application that you created in AAD.
  4. Note the Application (client) ID for the application. You will need to enter this later.
  5. To generate an authentication secret, click Certificates & secrets in the left pane.
  6. Under Client secrets, click + New client secret to create a new secret.
    Provide a basic description (which will be seen only by you) and an expiry duration (a longer period is advised to avoid credential issues in your nOps app) for the secret, and click Add. Once that is complete, you will see the newly created entry. From there, note the content of the Value field. IMPORTANT: Please pay close attention to this step and copy the correct field.

For more information about obtaining the application ID and generating the authentication secret, see Get application ID and authentication key.

Grant API access to your application

  1. Log in to the Azure portal
  2. In the left pane, select Azure Active Directory. (Or use the search bar at the top of the page.)
  3. In the left pane of Azure Active Directory, click App Registrations, and in the right pane, select the application that you created in AAD.
  4. In the left pane, select API permissions, and in the right pane, select Add a permission.
  5. In the Select an API pane, search for Azure Service Management or Microsoft Graph and select it. The five permissions required for the nOps application to work properly with the minimum required access are listed below:
    • Microsoft Graph – Delegated permission
      a. User.Read
    • Microsoft Graph – Application permissions
      a. AuditLog.Read.All
      b. Directory.Read.All
      c. Reports.Read.All
    • Azure Service Management
      a. user_impersonation
    For each section, after checking the permissions, click Add permissions in the bottom left corner.

The final screen with correctly configured permissions should look as seen below.


Note that for all three Microsoft Graph Application permissions, you will need to click the Grant admin consent button.

Grant the Reader role to the application

Ensure that your account in the Azure subscription has the Owner or User Access Administrator role so that it can manage access to Azure resources. If your account is assigned the Contributor role, you cannot grant roles.

Only subscriptions that have the Reader role for the application will be displayed to you on nOps.

  1. Log in to the Azure portal
  2. In the left pane, select Subscriptions. (Or use search bar at the top of the page.)
  3. Locate and select the required subscription from the list.
  4. n the left pane, choose Access control (IAM) and click + Add followed by Add Role assignment.
  5. In the Add role assignment pane, select Reader role Assign access to ‘User, group, or service principal’. NOTE: If you are Global Admin and you don’t see this button/menu being enabled, you need to check the Azure Portal.
    Navigate to Azure Active Directory > Properties > Access Management for Azure resources and set the toggle to YES. Save the settings, sign out from the portal and sign back in to see this menu.
  6. Select your Application and click Save.

For more information about granting the Reader role to the application, see Assign application to role.

The following information should be captured after successfully completing these steps:

  • TenantID
  • Application ID
  • Authentication secret

All values will be required for onboarding your Azure tenant to nOps.
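These three values map directly onto the standard Azure AD (OAuth 2.0) client-credentials flow. The sketch below shows how a tool would typically assemble a token request from them; the endpoint and field names follow Azure AD’s standard token API, while the function itself is illustrative and not part of nOps:

```python
def build_token_request(tenant_id, application_id, secret):
    """Assemble the Azure AD client-credentials token request from the
    three values captured above (illustrative sketch, not nOps code)."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    payload = {
        "grant_type": "client_credentials",
        "client_id": application_id,   # Application ID
        "client_secret": secret,       # Authentication secret
        # App-only scope for the Azure Management API; delegated
        # user_impersonation flows use a different grant type.
        "scope": "https://management.azure.com/.default",
    }
    return url, payload
```

POSTing that payload to the URL (for example with requests.post(url, data=payload)) returns the bearer token used for subsequent API calls.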

Solution Providers#

Add a Client from Partner Portal#

In this article, you will learn how to add a client from “Partner Portal”. The process is simple and straightforward:

  1. Log in to your nOps partner account.
  2. Click on your profile name.
  3. Click Manage Clients; this will take you to the “Manage Clients” page.
  4. In the “Manage Clients” page, click + New Clients > Create a new client.
  5. In the “Create a new client” popup —
    1. Fill out the Enter Client Name field.
    2. Select a product from the Product list. Click on the highlighted icon to open the product list.
  6. Click Create client.

When you click Create client, the popup will close and the client list on the “Manage Clients” page will be updated with the name of the new client. You can click on the three dots highlighted in red to open the action list:

From the actions list, to add a client’s cloud account to nOps, click Go To This Account. You can also edit client details or cancel their subscription from the action list:

Invite a Client for Well-Architected Assessment#

In this article, you will learn how to invite a client in nOps for a well-architected review. The process is simple and straightforward:

  1. Log in to your nOps partner account.
  2. Click on your profile name.
  3. Click Manage Clients; this will take you to the “Manage Clients” page.
  4. In the Manage Clients page, click + New Clients > Invite a client for well-architected assessment.
  5. In the “Invite customer to nOps” popup, fill out the Email of the client and click Invite Client. The Email Subject, Email Body, and Invitation Link are already prefilled for you. You can edit the Email Subject and Email Body before sending the invitation.

Once you click Invite Client, the client will receive an email with an invitation link, similar to:

If they click Sign up now, they will be redirected to the nOps sign-up page.

After creating an account and signing up for nOps, clients will also need to verify their email address. The verification email is sent to the same email address that was used for the invite.

When a client logs in to their nOps account for the first time, they will see the “Set Up nOps” popup screen, from which they can set up a cloud account with nOps. To learn more about the cloud account setup, see Adding an AWS account to nOps with Automatic Setup and Adding an AWS Account to nOps with Manual Setup.

Common Questions#

AWS IAM Policy - Auto Scaling#

As a part of the free nOps platform, we analyze your Cost and Usage Report (CUR) and provide you with Auto Scaling recommendations that you can automate.

In order to extract the full potential of the nOps Auto Scaling recommendations, you need permissions for two nOps features:

  • ShareSave Auto Scaling Recommendations: To get the scheduling recommendations.
  • Scheduler (One Time Configuration – Dynamic Configuration): To automate the scheduling of resources based on the ShareSave Auto Scaling recommendations.

Note: To enable Scheduler recommendations for any child account, the account must be fully configured, i.e., the ReadOnly policy access must be enabled at the child account level.

Access CUR Data to Analyze Utilization

The permissions required at the payer and child accounts for basic ShareSave Auto Scaling analysis are:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ce:GetCostAndUsage"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
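The single action this policy grants, ce:GetCostAndUsage, corresponds to the Cost Explorer GetCostAndUsage API. The sketch below shows the shape of a request that action permits; the granularity, metric, and grouping are illustrative choices, not nOps’ actual parameters:

```python
def build_cost_and_usage_params(start_date, end_date):
    """Request parameters for ce:GetCostAndUsage, the only API call the
    policy above allows (parameter choices are illustrative)."""
    return {
        "TimePeriod": {"Start": start_date, "End": end_date},  # yyyy-mm-dd
        "Granularity": "DAILY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "DIMENSION", "Key": "SERVICE"}],
    }

# With boto3, this would be invoked as:
#   boto3.client("ce").get_cost_and_usage(**build_cost_and_usage_params("2023-01-01", "2023-02-01"))
```

Because the policy grants only this read-only action, an analysis tool holding the role can aggregate spend but cannot touch any resources.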

Scheduler Permissions: Lambda and EventBridge

nOps requires the AWS-managed AWSLambdaBasicExecutionRole permissions, along with the following permissions, for the Scheduler Lambda function to automatically create schedules with the help of EventBridge:

These permissions are required on the child account or master account where the resources to be scheduled reside.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "autoscaling:UpdateAutoScalingGroup",
      "ec2:StartInstances",
      "ec2:StopInstances",
      "events:PutEvents",
      "rds:StopDBInstance",
      "rds:StartDBInstance",
      "s3:GetObject",
      "s3:PutObject",
      "s3:DeleteObject",
      "s3:GetObjectTagging",
      "sts:AssumeRole",
      "lambda:InvokeFunction",
      "logs:PutLogEvents",
      "logs:CreateLogGroup",
      "logs:CreateLogStream"
    ],
    "Resource": [
      "*"
    ]
  }]
}
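To see why the policy mixes EC2, RDS, and Auto Scaling actions, consider how a scheduler Lambda has to route each incoming schedule to exactly one of them. The sketch below assumes a hypothetical EventBridge event shape ({"detail": {...}}); the routing table names only actions granted by the policy above, but the mapping itself is illustrative, not nOps internals:

```python
# Illustrative routing table: (resource type, desired state) -> the IAM
# action the scheduler policy grants for that operation.
ACTION_MAP = {
    ("ec2", "start"): "ec2:StartInstances",
    ("ec2", "stop"): "ec2:StopInstances",
    ("rds", "start"): "rds:StartDBInstance",
    ("rds", "stop"): "rds:StopDBInstance",
    ("asg", "resize"): "autoscaling:UpdateAutoScalingGroup",
}

def route_schedule_event(event):
    """Pick the API action for a hypothetical EventBridge event of the form
    {"detail": {"resource_type": ..., "desired_state": ..., "resource_id": ...}}."""
    detail = event["detail"]
    key = (detail["resource_type"], detail["desired_state"])
    if key not in ACTION_MAP:
        raise ValueError(f"unsupported schedule target: {key}")
    return ACTION_MAP[key], detail["resource_id"]
```

The real handler would then call the matching boto3 client method; anything outside the map fails fast instead of attempting an action the role cannot perform.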

To get the full CloudFormation YAML template, see nOps Scheduler Lambda Function.

AWS IAM Policy - ShareSave Resource Scheduler#

As a part of the free nOps platform, we analyze your Cost and Usage Report (CUR) and provide you with scheduler recommendations that you can automate.

In order to extract the full potential of the nOps Scheduler, you need permissions for two nOps features:

  • ShareSave Resource Scheduler: To get the scheduling recommendations.
  • Scheduler using EventBridge: To automate the scheduling of resources based on the ShareSave Resource Scheduler recommendations.

Note: To enable Scheduler recommendations for any child account, the account must be fully configured, i.e., the ReadOnly policy access must be enabled at the child account level.

Access CUR data to analyze utilization

The permissions required at the payer and child accounts for basic ShareSave Resource Scheduler analysis are:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ce:GetCostAndUsage"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

nOps also requires two CUR reports to be configured, with the following bucket access policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::<paste-bucket-name-here>",
        "arn:aws:s3:::<paste-bucket-name-here>/*"
      ]
    }
  ]
}
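Since the same policy shape is needed for each CUR bucket, it can help to render it from the bucket name rather than pasting the placeholder twice. A small sketch (the helper name is ours, not an nOps or AWS API):

```python
import json

def cur_bucket_policy(bucket_name):
    """Render the bucket access policy shown above, substituting the
    <paste-bucket-name-here> placeholder with a concrete bucket name."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": "s3:*",
                "Effect": "Allow",
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}",
                    f"arn:aws:s3:::{bucket_name}/*",
                ],
            }
        ],
    }

# json.dumps(cur_bucket_policy("acme-cur-reports"), indent=2) yields a
# policy document ready to paste into the bucket's permissions editor.
```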

Scheduler Permissions: Lambda and EventBridge

nOps requires the AWS-managed AWSLambdaBasicExecutionRole permissions, along with the following permissions, for the Scheduler Lambda function to automatically create schedules with the help of EventBridge:

These permissions are required on the child account or master account where the resources to be scheduled reside.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "events:PutEvents",
      "s3:GetObject",
      "s3:PutObject",
      "s3:DeleteObject",
      "s3:GetObjectTagging",
      "ec2:StartInstances",
      "ec2:StopInstances",
      "rds:StopDBInstance",
      "rds:StartDBInstance",
      "logs:PutLogEvents",
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "autoscaling:UpdateAutoScalingGroup"
    ],
    "Resource": [
      "*"
    ]
  }]
}

To get the full CloudFormation YAML template, see nOps Scheduler Lambda Function.

AWS Setup/Permissions - ShareSave: Auto-Pilot Risk-Free Commitment Management#

YAML file for the CUR, S3 bucket, IAM policy, and the nOps ShareSave accounts linked to your AWS Organization

Prerequisites

To take advantage of the nOps ShareSave program, you must subscribe ONCE to the nOps AWS Marketplace offering. If you have previously subscribed, you are all set. If not, it’s easy to subscribe, and nOps will not charge anything unless we save you money:

  1. Log in to your AWS Payer/Management account with your IAM user.
  2. Click the following link: nOps AWS Marketplace Offering
  3. Once on the nOps offering, click “View Offering”, “Subscribe”, and finally “Set up Account”. That’s it, you’re done; now it’s time to save!

Steps for Automated Setup for Auto-Pilot Risk-Free Commitment Management

The automated setup is simple, easy and takes only 5 minutes:

  1. On your AWS Payer/Management account, nOps will create a new hourly Cost and Usage Report (CUR), an S3 bucket for the CUR, and the nOps ShareSave Payer cross-account role/policy:
    1. Trusted Entity Type: AWS Account
    2. Trusted Entity: nOps
    3. Role name: nops-sharesave-payer
    4. IAM Policy for the Role: IAM Policy
    5. Automated Creation: YAML File
  2. nOps will link two nOps ShareSave accounts to your AWS Organization in your AWS Payer/Management account:
    1. ShareSave Compute <xx> – used by nOps to buy/sell EC2 3-year Standard Reserved Instances and buy 3-year Compute Savings Plans.
      1. Trusted Entity Type: AWS Account
      2. Trusted Entity: nOps
      3. Role name: nops-sharesave-ri
      4. IAM Policy for the Role: IAM Policy
      5. Automated Creation: NA – preloaded
    2. ShareSave Other <xx> – used by nOps to buy current generation/flexible 1-year Reserved Instances for RDS, Redshift, OpenSearch, and ElastiCache.
      1. Trusted Entity Type: AWS Account
      2. Trusted Entity: nOps
      3. Role name: nops-sharesave-ri
      4. IAM Policy for the Role: IAM Policy
      5. Automated Creation: NA – preloaded

How to kick off the automated setup and begin saving!

To begin saving via nOps ShareSave – Auto-Pilot Risk-Free Commitment Management, follow these steps:

  1. Log in to your AWS Payer/Management account with your IAM user
  2. In another tab, log in to your nOps account and head over to the ShareSave dashboard:
  3. On the ShareSave Opportunity dashboard, in the List of Opportunities section, click the Configure Risk Free Commitment button:
  4. Once you click the Configure Risk Free Commitment button, the following pop-up will appear:
    1. If the pop-up does not appear, make sure that the pop-up isn’t being blocked by your browser.
    2. Before you click Proceed, make sure that you’re logged in to your AWS Payer/Management account in the same browser.
  5. After you click Proceed, nOps will take you to your AWS console’s CloudFormation Quick create stack page, with all the required information pre-filled.
    Acknowledge and click Create stack:
    NOTE: The CloudFormation stack will take about 3 to 5 minutes to complete.
  6. Once the CloudFormation stack has completed, go to AWS Organizations via the AWS Console to see the two nOps ShareSave accounts that were added.

ShareSave: Auto-Pilot Risk-Free Commitment Management configuration is now complete.


Note: It will take up to 7 days for nOps Auto-Pilot Risk-Free Commitment Management AI to begin buying commitments within the ShareSave accounts

Referenced Information from Above:

AWS Payer/Management Account, YAML File for CUR, S3 bucket, and nOps Role/Policy

AWSTemplateFormatVersion: "2010-09-09"

Description: |
  nOps.io integration role for ShareSave accounts (updated September 12, 2022)
  For more information visit http://help.nops.io

Parameters:
  S3CurBucket:
    Description: Format customername-sharesave
    Type: String
  ReportName:
    Description: Format customername-sharesave
    Type: String

Resources:
  CURBucketCreate:
    Type: "AWS::S3::Bucket"
    DeletionPolicy: Retain
    Properties:
      BucketName: !Ref "S3CurBucket"

  CURBucketPolicy:
    Type: "AWS::S3::BucketPolicy"
    DeletionPolicy: Retain
    DependsOn: CURBucketCreate
    Properties:
      Bucket: !Ref "S3CurBucket"
      PolicyDocument:
        Statement:
          - Action:
              - "s3:GetBucketAcl"
              - "s3:GetBucketPolicy"
            Effect: Allow
            Resource: !Join ["", ["arn:", !Ref "AWS::Partition", ":s3:::", !Ref "S3CurBucket"]]
            Principal:
              Service:
                - billingreports.amazonaws.com
          - Action:
              - "s3:PutObject"
            Effect: Allow
            Resource: !Join ["", ["arn:", !Ref "AWS::Partition", ":s3:::", !Ref "S3CurBucket", "/*"]]
            Principal:
              Service:
                - billingreports.amazonaws.com

  CURCreate:
    Type: "AWS::CUR::ReportDefinition"
    DeletionPolicy: Retain
    DependsOn: CURBucketPolicy
    Properties:
      ReportName: !Ref "ReportName"
      RefreshClosedReports: True
      S3Bucket: !Ref "S3CurBucket"
      S3Prefix: sharesave
      S3Region: us-east-1
      TimeUnit: HOURLY
      ReportVersioning: OVERWRITE_REPORT
      AdditionalArtifacts:
        - REDSHIFT
      Compression: GZIP
      Format: textORcsv

  nOpsShareSaveRole:
    Type: "AWS::IAM::Role"
    DependsOn: CURBucketPolicy
    Properties:
      RoleName: nops-sharesave-payer
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              AWS:
                - arn:aws:iam::727378841472:root
            Action:
              - "sts:AssumeRole"
      Path: /
      Policies:
        - PolicyName: nops-sharesave-policy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Sid: S3ViewCUR
                Effect: Allow
                Action:
                  - "s3:ListBucket"
                Resource: !Join ["", ["arn:", !Ref "AWS::Partition", ":s3:::", !Ref "S3CurBucket"]]
              - Sid: S3AccessCUR
                Effect: Allow
                Action:
                  - "s3:GetObject"
                  - "s3:PutObject"
                  - "s3:DeleteObject"
                Resource: !Join ["", ["arn:", !Ref "AWS::Partition", ":s3:::", !Ref "S3CurBucket", "/*"]]
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/AWSSupportAccess"

AWS Payer/Management Account: IAM Policy

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::<YOUR BUCKET NAME HERE>",
      "Effect": "Allow",
      "Sid": "S3ViewCUR"
    },
    {
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::<YOUR BUCKET NAME HERE>/*",
      "Effect": "Allow",
      "Sid": "S3AccessCUR"
    }
  ]
}

nOps ShareSave Accounts: IAM Policy nOps uses to Buy/Sell Reservations and Savings Plans

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:DescribeReservedInstances",
        "ec2:DescribeReservedInstancesListings",
        "ec2:DescribeReservedInstancesModifications",
        "ec2:DescribeReservedInstancesOfferings",
        "ec2:ModifyReservedInstances",
        "ec2:PurchaseReservedInstancesOffering",
        "ec2:CreateReservedInstancesListing",
        "ec2:CancelReservedInstancesListing",
        "ec2:GetReservedInstancesExchangeQuote",
        "ec2:AcceptReservedInstancesExchangeQuote",
        "rds:DescribeReservedDBInstances",
        "rds:DescribeReservedDBInstancesOfferings",
        "rds:PurchaseReservedDBInstancesOffering",
        "support:*",
        "SavingsPlans:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
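This policy mixes exact action names with wildcards such as support:* and SavingsPlans:*. IAM expands * within an Action string, so, for instance, support:CreateCase is covered. Below is a simplified sketch of that matching rule; real IAM evaluation is case-insensitive and also involves Resource and Condition elements, so this only illustrates the Action wildcard:

```python
import fnmatch

def action_allowed(requested, allowed_actions):
    """Return True if a concrete API action matches any Action entry,
    treating '*' as an IAM-style wildcard (simplified illustration)."""
    return any(fnmatch.fnmatchcase(requested, pattern) for pattern in allowed_actions)
```

For example, action_allowed("support:CreateCase", ["support:*"]) is True, while an action the policy never names, such as ec2:TerminateInstances, is refused.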

Azure Roles and Permissions#

We only see what we need, no more, and we need you to give us the permission first.

nOps requires safe, secure, and Azure-approved access to your accounts in order to give you the analysis, dashboards, and reports that you need.

Required Role

To complete the nOps Azure setup, the user must possess one of the following roles to create and manage the nOps application registration:

  1. Admin or Global Admin role.
  2. Application Administrator or Application Developer role.

An Application Administrator can create and manage all aspects of app registrations and enterprise apps.

An Application Developer can create application registrations independent of the ‘Users can register applications’ setting.

Either the Application Administrator or the Application Developer role is sufficient for the nOps application registration.

Type? What? and Why?

The following tables describe each permission that nOps requires:

  • Permission name.
  • Permission Type:
    • Application permissions allow an application in Azure Active Directory to act as its own entity, rather than on behalf of a specific user.
    • Delegated permissions allow an application in Azure Active Directory to perform actions on behalf of a particular user.
  • What the permission is.
  • Why the permission is important for nOps.
Permissions – Azure Service Management

  • user_impersonation
    Permission Type: Delegated
    What: Allows the application to access the Azure Management Service API acting as users in the organization.
    Why: The limited user_impersonation permission is needed for the authentication flow to work with the Azure Management API.

Permissions – Microsoft Graph

  • AuditLog.Read.All
    Permission Type: Application
    What: Allows the app to read and query your audit log activities, without a signed-in user.
    Why: Anticipated for future use. No current usage.

  • Directory.Read.All
    Permission Type: Application
    What: Allows the app to read data in your organization’s directory, such as users, groups, and apps, without a signed-in user.
    Why: Default permission attached to the tenant. Required for detecting various compliance issues related to users and groups through nOps.

  • Reports.Read.All
    Permission Type: Application
    What: Allows an app to read all service usage reports without a signed-in user. Services that provide usage reports include Office 365 and Azure Active Directory.
    Why: Mostly required for accessing billing information, which is one of the main nOps functionalities. Also enables nOps to fetch usage reports of services like credentialUserRegistrationDetails (detects compliance issues such as registration, authentication methods, MFA, and others).

  • User.Read
    Permission Type: Delegated
    What: Allows users to sign in to the app, and allows the app to read the profile of signed-in users. It also allows the app to read basic company information of signed-in users.
    Why: Grants permission to read the profile of the signed-in user, which is simply accessing information using the client_id and client_secret linked to nOps.

User Guides#

Dashboard#

nOps Dashboard#

Use nOps Cost Control facilities to manage and optimize your cloud costs.

nOps provides a variety of views to aid you in managing and controlling your cloud costs. Calculating your cloud costs goes beyond just adding up resource costs for a specific region. nOps Cost Control also shows you related costs and credits, to give you a clear picture of your monthly spend.

To access nOps Cost facilities:

Partners:

  1. Log in to your nOps account.
  2. From the Profile menu, select Manage Clients.
  3. Click on a client name and go to the client account.
  4. Click on Cost in the top menu.

Customers:

  1. Log into your nOps account.
  2. Click on Cost, in the top menu, and choose one of the options in the drop-down.

This topic describes:

  • nOps Dashboard
  • Overall Cost Features
  • nOps Analysis Tools in Cloud Resources Cost

nOps Dashboard

The nOps Dashboard summarizes changes in your cloud costs and allows you to choose what period of time to compare. The cost changes are broken out by:

  • Cloud Accounts
  • Cloud Services
  • Usage types
  • Operations

The order of these sections is constant and reflects the general impact to your cloud spend. Individual line items are listed within each group, ordered by spend.

Buttons at the upper right enable you to choose which cloud-account types to include and also whether the cost comparison should be monthly, month-to-date, or weekly.

Note that the Cloud Accounts summary at the top includes links to give you detail of potential savings and underutilized resources.

Note also the View More link at the bottom left of the individual items partial list in each group, to show all records for that group.

Overall Cost Control Options

  • Cloud Resources Cost
    Description: Overall costs for the last month (or change the date range under Filters), plus options to filter by resource type, cloud account, region, and other parameters.
    View: Monthly and daily spend by region, specific cloud service, usage type, specific operations, and specific tags; with various filtering options (Filters box at left), and a log of cost changes.

  • Chargebacks Center
    Description: Chargebacks for departments.

  • Container Cost
    Description: Granular costs for EKS.
    View: Status of instances and costs for EKS pods and services.

  • Spot Advisor
    Description: Find EC2 instances that can be migrated to use Spot instances.
    View: Instances that can be converted to Spot Instance pricing.

  • Tag Explorer
    Description: A label assigned to a resource, consisting of a key and an optional value, both of which you define.
    View: Tags enable you to categorize your resources in different ways. For example: tag resources used to support an app, to track costs for the app.

  • Resource Rightsizing
    Description: Find over-resourced EC2 instances and tune them down to usage to save money.
    View: Recommendations for rightsizing.

  • Commitment Management
    Description: View and manage RI planning, RI recommendations, RI usage, and savings plan recommendations.
    View: Filters you created, recommendation types, offering classes, and payment types and terms.

nOps Opportunities Dashboard#

Expanded dashboards for Partners

The Opportunities dashboard contains information about changes made by your clients to their cloud environments. Only Partners (users and Admins) can access this dashboard.

The dashboard contains a number of charts that you can scroll by using the arrow keys. The charts are followed by a List of Opportunities. The list items change based on what you click or select.

Access the Opportunities dashboard using the following path:
Log in as a Partner > Click on the Opportunities icon from the Partner Dashboard or click the Opportunities tab at the top menu.

Use the dashboard to see which services your clients use most (or least) and find opportunities to engage with them.

As a Partner, you now have better visibility into your clients’ cloud environments, helping you assist them in achieving their cost-savings goals and in making rightsizing recommendations.

This article contains the following topics:

The Dashboard View

How to use the charts

Create a Category

The Opportunities List

The Dashboard View

The dashboard displays charts separated by the following tabs: Categories, Services, and Customers. Click a tab to see the default charts. The Categories and Services tabs allow you to view categories and services used by your clients. The Customers tab lists the top ten clients and their use of services and categories.

There are two types of categories:

  • Categories that are mapped to AWS database service categories.
    Note: These are predefined and cannot be changed, edited or deleted.
  • Custom categories that you define. You can create categories based on your requirements. These can be edited to add or delete services, or to delete the category.
    For example: you can create a category to track the use of EC2 instances that are HIPAA compliant. See To create a Category for more information.
    Note: Clicking on any custom categories (defined by you) will not change the List of Opportunities view. That functionality is available only for the default charts.

How to use the charts

Information on the default charts can be viewed in a number of ways.
Note: Clicking, removing, or changing a selection in a chart changes the information displayed in the List of Opportunities section.

  • Click on a chart group, then hover on a section to view information about it.
  • Click on a section of a pie chart. Information about your selection appears in the updated List of Opportunities.
  • Click the navigation arrows under the chart key to scroll through the key to find an item.
  • Click an item in the chart key to remove that item from the chart view.
    The removed item no longer appears in the list.
  • Click the search box drop-down to display information about one or more items in a category or service. Reset the view by clicking Clear Filters at the top right of the chart list.
  • Use the date filter to view changes made within a date range you specify. The default date range for the charts is the last 15 days.

To create a Category

  1. Click the Add new Category button to open the dialog.
  2. Add a Name and select one or more Services from the drop-down. The services are grouped by type of AWS category, such as Container, Storage, Networking and Content Delivery, and so on. It is easier to find an AWS Service by typing the first few letters to narrow the choices before making a selection.
  3. Click Add Category for the system to begin assembling the data for the new category.

IMPORTANT: Custom categories do not change the list view when clicking a selection or removing an item in the key. That functionality is only available for Default Categories and Services.

Edit or Delete a Category

Note: Only custom categories (that you created) can be edited or deleted.

  1. Click the settings icon on the top right, for any custom category you created.
  2. At the Edit Category dialog, edit the selections (including the Category Name) or click Delete to delete it.

List of Opportunities

The list contains data about which new services were added by your customers within the past fifteen days. You can customize the date range and use filters as described below:

  • Change the date range by using the Date drop-down at the top right of each section.
  • Filter the results by selecting one or more items from the Category, Service, or Client drop-downs. Enter the first few letters to narrow the results.
    The drop-down only lists default categories. The list data is filtered by your selections and changes immediately. Selecting a Service may change the options available on the Category filter.
  • Change the list view by removing or adding items to the drop-down.
  • Download the information by clicking the Download List button.
  • Click an opportunity to expand the information. View information such as the Customer name, the Category, the selected Services type, and their spend. The “& 4 others” label displays any custom categories you created where this category also appears.
  • Click Full Usage History or View all other services to open the Usage History dialog with details on Spend and Other Used Services.
  • The Opportunity Spend tab in the Usage History dialog displays the total spend by each resource flagged in this opportunity.

nOps Search DSL#

Write a query in the search bar to find specific resources. Learn how to create search queries

How to Use nOps Search DSL

Searching for resources in nOps can be a daunting task. nOps captures a large number of resources in an AWS account, and the ability to pinpoint a specific resource or set of resources is important. A single plain search query might not narrow the results enough; nOps DSL queries make this much easier and faster. This tutorial shows the principles of searching with DSL queries.

Supported comparison operators:
=, != (int, float and dates, strings)
>, >= (int, float and dates)
<, <= (int, float and dates)

Supported bool operators (case-insensitive):
and, &
or, |

Date format:
yyyy-mm-dd
Example: some_date = 2020-05-02

“IN” operator:
field IN [int, "str", float]
Example: type in [ec2, "aws.s3", aws_ebs]

Bool logical order with brackets: The same thing as in other languages: use “()” to show the execution order of logical blocks.
Example: (type = ec2 or type = s3) and cost.usagetype.cost > 13.37

Examples:

Find all EC2 instances with VPC equals “vpc-092eb20aa971a6e0b” or “vpc-d5339bb0” and with the total cost less than or equal to $500:
vpc_id in ["vpc-092eb20aa971a6e0b", "vpc-d5339bb0"] and type=ec2 and cumulative_cost <= 500

Find all EC2 instances with the Average CPU Utilization less than 30% for the last 3 months
type=ec2 and utilization.months.months_3.cpu.cpu_usage < 30.00

Find all EBS volumes without Encryption enabled
type=ebs and encrypted=false

Find all EC2 instances launched after some date:
type=ec2 and launch_time>2020-01-01
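To make the operator rules above concrete, here is a minimal evaluator for a single comparison against one resource record. It follows dotted field paths the way the example queries do; parsing of full expressions (and/or, brackets, IN) is omitted, and none of this is nOps’ actual implementation:

```python
import operator

# Comparison operators supported by the DSL, as listed above.
OPS = {
    "=": operator.eq, "!=": operator.ne,
    ">": operator.gt, ">=": operator.ge,
    "<": operator.lt, "<=": operator.le,
}

def evaluate(resource, field, op, value):
    """Evaluate one comparison, e.g. evaluate(r, "type", "=", "ec2"),
    resolving dotted paths like utilization.months.months_3.cpu.cpu_usage."""
    current = resource
    for part in field.split("."):
        current = current[part]
    return OPS[op](current, value)
```

With this in hand, the query type=ec2 and utilization.months.months_3.cpu.cpu_usage < 30.00 is just the conjunction of two evaluate calls; note that yyyy-mm-dd date strings compare correctly as plain strings, which is why launch_time>2020-01-01 works.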

The following is a list of the fields that can be used in queries. Subsequently, this list will be limited to a pre-defined set of fields only.

'Name',

'active',

'active_services_count',

'allocation_id',

'arn',

'association_id',

'attached',

'attached_policies.AttachedPolicies.PolicyArn',

'attached_policies.AttachedPolicies.PolicyName',

'attached_policies.HasAttachedPolicies',

'attr_defs.AttributeName',

'attr_defs.AttributeType',

'availability_zone',

'availability_zones',

'available_ip_address_count',

'backup_retention_period',

'billing',

'billing_type',

'bucket_name',

'canonical_hosted_zone_name',

'canonical_hosted_zone_name_id',

'cf_stack',

'cidr_block',

'cloudtrail.first_event.created_by',

'cloudtrail.first_event.event_id',

'cloudtrail.first_event.event_name',

'cloudtrail.first_event.event_time',

'cloudtrail.meta.type',

'codesize',

'cost.available_operation',

'cost.available_usagetype',

'cost.dates_available',

'cost.meta.account',

'cost.meta.availability_zone',

'cost.meta.region',

'cost.meta.resource',

'cpu',

'cpu*',

'cpu_2592000',

'cpu_3600',

'cpu_600',

'cpu_604800',

'cpu_86400',

'create_date',

'create_time',

'created_time',

'creation_date',

'ct',

'cumulative_cost',

'current_month_cost',

'db_cluster_identifier',

'db_instance_class',

'desc',

'description',

'dhcp_options_id',

'dns_name',

'domain',

'ec2_ids',

'encrypted',

'endpoint.Address',

'endpoint.HostedZoneId',

'endpoint.Port',

'engine',

'first_seen',

'groups.Groups.Arn',

'groups.Groups.CreateDate',

'groups.Groups.GroupId',

'groups.Groups.GroupName',

'groups.Groups.Path',

'groups.HasGroups',

'health_check.access_point',

'health_check.healthy_threshold',

'health_check.interval',

'health_check.target',

'health_check.timeout',

'health_check.unhealthy_threshold',

'id',

'instance_id',

'instance_name',

'instance_state',

'instance_tenancy',

'instance_type',

'instances_sg',

'iops',

'ip_address',

'is_as',

'is_default',

'is_eks',

'is_multi_region',

'item_count',

'item_id',

'key_name',

'last_updated',

'lastmodified',

'launch_time',

'listeners.instance_port',

'listeners.instance_protocol',

'listeners.load_balancer',

'listeners.load_balancer_port',

'listeners.protocol',

'listeners.ssl_certificate_id',

'log_file_validaton_enabled',

'memsize',

'mfa_enabled',

'monitored',

'multi_az',

'name',

'network_interface_id',

'network_interface_owner_id',

'node_type',

'number_of_mount_targets',

'number_of_nodes',

'owner_id',

'password_last_used',

'path',

'performance_mode',

'policies.HasPolicies',

'policies.Policies',

'policies.other_policies',

'policy_document.Statement.Action',

'policy_document.Statement.Condition.StringEquals.sts:ExternalId',

'policy_document.Statement.Effect',

'policy_document.Statement.Principal.AWS',

'policy_document.Statement.Principal.Federated',

'policy_document.Statement.Principal.Service',

'policy_document.Statement.Sid',

'policy_document.Version',

'preferred_backup_window',

'private_dns_name',

'private_ip',

'private_ip_address',

'product',

'project.access_key',

'project.access_type',

'project.account_number',

'project.client',

'project.id',

'project.name',

'project.role_name',

'project.status',

'public_ip',

'publicly_accessible',

'region',

'remediation.action',

'remediation.compliance_type',

'remediation.function',

'remediation.id',

'remediation.integration',

'remediation.message_id',

'remediation.modified',

'remediation.status',

'remediation.user',

'replica_role',

'role_id',

'rules.from_port',

'rules.grants.resource_id',

'rules.grants.resource_name',

'rules.grants.type',

'rules.grants.value',

'rules.ip_protocol',

'rules.to_port',

'runtime',

's3_bucket_access_log_enabled',

'scheme',

'search',

'security_alerts.from_port',

'security_alerts.grant.type',

'security_alerts.grant.value',

'security_alerts.ip_protocol',

'security_alerts.status',

'security_alerts.to_port',

'security_groups',

'size',

'snapshot_id',

'state',

'state_change_time',

'status',

'storage_type',

'subnet_id',

'subnets',

'suggest',

'tab_size_bytes',

'tags.key',

'tags.value',

'tags_key.key',

'tags_key.value',

'throughput_mode',

'type',

'user_id',

'user_name',

'user_name_lowercase',

'utilization.consumed_capacity_percents.read',

'utilization.consumed_capacity_percents.write',

'utilization.cpu.cpu_usage',

'utilization.disk.read_iops',

'utilization.disk.read_ops',

'utilization.disk.total_io',

'utilization.disk.write_iops',

'utilization.disk.write_ops',

'utilization.io_limit.percents',

'utilization.months.months_1.consumed_capacity_percents.read',

'utilization.months.months_1.consumed_capacity_percents.write',

'utilization.months.months_1.cpu.cpu_harmonic_mean',

'utilization.months.months_1.cpu.cpu_usage',

'utilization.months.months_1.cpu.cpu_utilization',

'utilization.months.months_1.cpu.cpu_variance',

'utilization.months.months_1.cpu.utilization_percent',

'utilization.months.months_1.disk.disk_read_harmonic_mean',

'utilization.months.months_1.disk.disk_write_harmonic_mean',

'utilization.months.months_1.disk.read_iops',

'utilization.months.months_1.disk.read_ops',

'utilization.months.months_1.disk.space_utilization',

'utilization.months.months_1.disk.total_io',

'utilization.months.months_1.disk.write_iops',

'utilization.months.months_1.disk.write_ops',

'utilization.months.months_1.io_limit.percents',

'utilization.months.months_1.memory.utilization_percent',

'utilization.months.months_1.network.in',

'utilization.months.months_1.network.network_in_harmonic_mean',

'utilization.months.months_1.network.network_out_harmonic_mean',

'utilization.months.months_1.network.out',

'utilization.months.months_1.ram.freeable_memory',

'utilization.months.months_12.consumed_capacity_percents.read',

'utilization.months.months_12.consumed_capacity_percents.write',

'utilization.months.months_12.cpu.cpu_harmonic_mean',

'utilization.months.months_12.cpu.cpu_usage',

'utilization.months.months_12.cpu.cpu_utilization',

'utilization.months.months_12.cpu.cpu_variance',

'utilization.months.months_12.cpu.utilization_percent',

'utilization.months.months_12.disk.disk_read_harmonic_mean',

'utilization.months.months_12.disk.disk_write_harmonic_mean',

'utilization.months.months_12.disk.read_iops',

'utilization.months.months_12.disk.read_ops',

'utilization.months.months_12.disk.space_utilization',

'utilization.months.months_12.disk.total_io',

'utilization.months.months_12.disk.write_iops',

'utilization.months.months_12.disk.write_ops',

'utilization.months.months_12.io_limit.percents',

'utilization.months.months_12.memory.utilization_percent',

'utilization.months.months_12.network.in',

'utilization.months.months_12.network.network_in_harmonic_mean',

'utilization.months.months_12.network.network_out_harmonic_mean',

'utilization.months.months_12.network.out',

'utilization.months.months_12.ram.freeable_memory',

'utilization.months.months_3.consumed_capacity_percents.read',

'utilization.months.months_3.consumed_capacity_percents.write',

'utilization.months.months_3.cpu.cpu_harmonic_mean',

'utilization.months.months_3.cpu.cpu_usage',

'utilization.months.months_3.cpu.cpu_utilization',

'utilization.months.months_3.cpu.cpu_variance',

'utilization.months.months_3.cpu.utilization_percent',

'utilization.months.months_3.disk.disk_read_harmonic_mean',

'utilization.months.months_3.disk.disk_write_harmonic_mean',

'utilization.months.months_3.disk.read_iops',

'utilization.months.months_3.disk.read_ops',

'utilization.months.months_3.disk.space_utilization',

'utilization.months.months_3.disk.total_io',

'utilization.months.months_3.disk.write_iops',

'utilization.months.months_3.disk.write_ops',

'utilization.months.months_3.io_limit.percents',

'utilization.months.months_3.memory.utilization_percent',

'utilization.months.months_3.network.in',

'utilization.months.months_3.network.network_in_harmonic_mean',

'utilization.months.months_3.network.network_out_harmonic_mean',

'utilization.months.months_3.network.out',

'utilization.months.months_3.ram.freeable_memory',

'utilization.months.months_6.consumed_capacity_percents.read',

'utilization.months.months_6.consumed_capacity_percents.write',

'utilization.months.months_6.cpu.cpu_harmonic_mean',

'utilization.months.months_6.cpu.cpu_usage',

'utilization.months.months_6.cpu.cpu_utilization',

'utilization.months.months_6.cpu.cpu_variance',

'utilization.months.months_6.cpu.utilization_percent',

'utilization.months.months_6.disk.disk_read_harmonic_mean',

'utilization.months.months_6.disk.disk_write_harmonic_mean',

'utilization.months.months_6.disk.read_iops',

'utilization.months.months_6.disk.read_ops',

'utilization.months.months_6.disk.space_utilization',

'utilization.months.months_6.disk.total_io',

'utilization.months.months_6.disk.write_iops',

'utilization.months.months_6.disk.write_ops',

'utilization.months.months_6.io_limit.percents',

'utilization.months.months_6.memory.utilization_percent',

'utilization.months.months_6.network.in',

'utilization.months.months_6.network.network_in_harmonic_mean',

'utilization.months.months_6.network.network_out_harmonic_mean',

'utilization.months.months_6.network.out',

'utilization.months.months_6.ram.freeable_memory',

'utilization.network.in',

'utilization.network.out',

'volume_id',

'vpc_id',

'zone',

'cloudtrail.events.event_id',

'cloudtrail.events.event_name',

'cloudtrail.events.event_operation_type',

'cloudtrail.events.event_source',

'cloudtrail.events.event_time',

'cloudtrail.events.username',

'cost.daily.cost',

'cost.daily.date',

'cost.monthly.cost',

'cost.monthly.date',

'cost.operation.cost',

'cost.operation.date',

'cost.operation.item_type',

'cost.operation_dates_available.dates',

'cost.operation_dates_available.item_type',

'cost.tags.key',

'cost.tags.value',

'cost.usagetype.cost',

'cost.usagetype.date',

'cost.usagetype.item_type',

'cost.usagetype_dates_available.dates',

'cost.usagetype_dates_available.item_type',

'tags_kv.key',

'tags_kv.source',

'tags_kv.value',

'violations.Arn',

'violations.ContinuousBackupsStatus',

'violations.CreateDate',

'violations.PasswordLastUsed',

'violations.Path',

'violations.PointInTimeRecoveryStatus',

'violations.Region',

'violations.TableName',

'violations.UserId',

'violations.UserName',

'violations._id',

'violations.access_key_id',

'violations.access_key_last_used',

'violations.access_key_masked',

'violations.actual_iops',

'violations.admin_access_policies.PolicyArn',

'violations.admin_access_policies.PolicyName',

'violations.arn',

'violations.cluster_name',

'violations.compliant',

'violations.cost',

'violations.cpu_604800',

'violations.cpu_utilization',

'violations.create_date',

'violations.current_30d_cost',

'violations.details.cpu_average',

'violations.details.cpu_min_cores_required',

'violations.details.cpu_variance',

'violations.details.diskreadops_average',

'violations.details.diskwriteops_average',

'violations.details.legacy',

'violations.details.network_required',

'violations.details.networkin_average',

'violations.details.networkout_average',

'violations.details.ram_used',

'violations.disk_utilization',

'violations.errors',

'violations.estimated_saving_month',

'violations.impact',

'violations.inline_policies',

'violations.io_usage_percents',

'violations.iops',

'violations.is_config_channels_present',

'violations.is_config_recorder_running',

'violations.is_config_recorders_present',

'violations.item_id',

'violations.item_type',

'violations.last_used',

'violations.log_events_bandwidth_kbytes',

'violations.logging_enabled',

'violations.max_read_avg',

'violations.max_read_request',

'violations.max_write_avg',

'violations.max_write_request',

'violations.memory_utilization',

'violations.monthly_saving',

'violations.multi_az',

'violations.name',

'violations.network_in',

'violations.network_mbytes',

'violations.network_out',

'violations.new_type',

'violations.node_type',

'violations.not_public_read',

'violations.not_public_write',

'violations.old_type',

'violations.overlaps.region',

'violations.overlaps.vpc_cidr',

'violations.overlaps.vpc_id',

'violations.overlaps.vpc_name',

'violations.overwrite',

'violations.performance_mode',

'violations.period',

'violations.policies.PolicyArn',

'violations.policies.PolicyName',

'violations.ports',

'violations.possible_30d_cost',

'violations.project',

'violations.project_id',

'violations.read_capacity_units',

'violations.read_consumed_percentage',

'violations.read_iops',

'violations.read_iops_utilization',

'violations.read_usage_percents',

'violations.reason',

'violations.recommendations.current_30d_cost',

'violations.recommendations.details.early_delete_days',

'violations.recommendations.details.requests_count',

'violations.recommendations.details.storage_gb',

'violations.recommendations.has_early_delete_fee',

'violations.recommendations.new_type',

'violations.recommendations.possible_30d_cost',

'violations.recommendations.update_monthly_change',

'violations.region',

'violations.requests_count',

'violations.resource_id',

'violations.server_side_encryption_enabled',

'violations.size',

'violations.status',

'violations.subnet_ids',

'violations.tab_size_mbytes',

'violations.table_name',

'violations.table_size_bytes',

'violations.tags.CostUnit',

'violations.tags.Name',

'violations.tags.Owner',

'violations.tags.Purpose',

'violations.tags.User',

'violations.tags.aws:autoscaling:groupName',

'violations.tags.aws:cloudformation:logical-id',

'violations.tags.aws:cloudformation:stack-id',

'violations.tags.aws:cloudformation:stack-name',

'violations.tags.aws:ec2launchtemplate:id',

'violations.tags.aws:ec2launchtemplate:version',

'violations.tags.eks:cluster-name',

'violations.tags.eks:nodegroup-name',

'violations.tags.k8s.io/cluster-autoscaler/enabled',

'violations.tags.k8s.io/cluster-autoscaler/nops-test-eks',

'violations.tags.kubernetes.io/cluster/nops-test-eks',

'violations.tags.nOps',

'violations.tags.owner',

'violations.throughput',

'violations.timestamp',

'violations.total_io',

'violations.type',

'violations.update_monthly_change',

'violations.use_days',

'violations.use_weeks',

'violations.user_id',

'violations.user_name',

'violations.versioning_enabled',

'violations.violation_date',

'violations.violation_subtype',

'violations.violation_type',

'violations.volume_id',

'violations.vpc_cidr',

'violations.vpc_id',

'violations.vpc_name',

'violations.write_capacity_units',

'violations.write_consumed_percentage',

'violations.write_iops',

'violations.write_iops_utilization',

'violations.write_usage_percents',

'violations_dates_available.dates',

'violations_dates_available.first_seen',

'violations_dates_available.violation_subtype',

'violations_dates_available.violation_type',

'violations_history.Arn',

'violations_history.ContinuousBackupsStatus',

'violations_history.CreateDate',

'violations_history.PasswordLastUsed',

'violations_history.Path',

'violations_history.PointInTimeRecoveryStatus',

'violations_history.Region',

'violations_history.TableName',

'violations_history.UserId',

'violations_history.UserName',

'violations_history._id',

'violations_history.access_key_id',

'violations_history.access_key_last_used',

'violations_history.access_key_masked',

'violations_history.admin_access_policies.PolicyArn',

'violations_history.admin_access_policies.PolicyName',

'violations_history.arn',

'violations_history.cluster_name',

'violations_history.compliant',

'violations_history.cost',

'violations_history.cpu_604800',

'violations_history.cpu_utilization',

'violations_history.create_date',

'violations_history.current_30d_cost',

'violations_history.details.cpu_average',

'violations_history.details.cpu_min_cores_required',

'violations_history.details.cpu_variance',

'violations_history.details.diskreadops_average',

'violations_history.details.diskwriteops_average',

'violations_history.details.early_delete_days',

'violations_history.details.legacy',

'violations_history.details.network_required',

'violations_history.details.networkin_average',

'violations_history.details.networkout_average',

'violations_history.details.ram_used',

'violations_history.details.requests_count',

'violations_history.details.storage_gb',

'violations_history.details.warning_message',

'violations_history.disk_utilization',

'violations_history.errors',

'violations_history.has_early_delete_fee',

'violations_history.id',

'violations_history.impact',

'violations_history.inline_policies',

'violations_history.io_usage_percents',

'violations_history.item_id',

'violations_history.item_type',

'violations_history.last_used',

'violations_history.log_events_bandwidth_kbytes',

'violations_history.logging_enabled',

'violations_history.max_read_avg',

'violations_history.max_read_request',

'violations_history.max_write_avg',

'violations_history.max_write_request',

'violations_history.memory_utilization',

'violations_history.monthly_saving',

'violations_history.multi_az',

'violations_history.name',

'violations_history.network_in',

'violations_history.network_mbytes',

'violations_history.network_out',

'violations_history.new_type',

'violations_history.node_type',

'violations_history.not_public_read',

'violations_history.not_public_write',

'violations_history.old_type',

'violations_history.overlaps.region',

'violations_history.overlaps.vpc_cidr',

'violations_history.overlaps.vpc_id',

'violations_history.overlaps.vpc_name',

'violations_history.overwrite',

'violations_history.performance_mode',

'violations_history.period',

'violations_history.policies.PolicyArn',

'violations_history.policies.PolicyName',

'violations_history.ports',

'violations_history.possible_30d_cost',

'violations_history.project',

'violations_history.read_capacity_units',

'violations_history.read_consumed_percentage',

'violations_history.read_iops',

'violations_history.read_iops_utilization',

'violations_history.read_usage_percents',

'violations_history.reason',

'violations_history.recommendations.current_30d_cost',

'violations_history.recommendations.details.early_delete_days',

'violations_history.recommendations.details.requests_count',

'violations_history.recommendations.details.storage_gb',

'violations_history.recommendations.has_early_delete_fee',

'violations_history.recommendations.new_type',

'violations_history.recommendations.possible_30d_cost',

'violations_history.recommendations.update_monthly_change',

'violations_history.region',

'violations_history.requests_count',

'violations_history.server_side_encryption_enabled',

'violations_history.status',

'violations_history.subnet_ids',

'violations_history.tab_size_mbytes',

'violations_history.table_name',

'violations_history.table_size_bytes',

'violations_history.tags.ChangeVersion1',

'violations_history.tags.CostUnit',

'violations_history.tags.JT',

'violations_history.tags.Name',

'violations_history.tags.Owner',

'violations_history.tags.Purpose',

'violations_history.tags.User',

'violations_history.tags.aws:autoscaling:groupName',

'violations_history.tags.aws:cloudformation:logical-id',

'violations_history.tags.aws:cloudformation:stack-id',

'violations_history.tags.aws:cloudformation:stack-name',

'violations_history.tags.aws:ec2launchtemplate:id',

'violations_history.tags.aws:ec2launchtemplate:version',

'violations_history.tags.aws:ec2spot:fleet-request-id',

'violations_history.tags.eks:cluster-name',

'violations_history.tags.eks:nodegroup-name',

'violations_history.tags.k8s.io/cluster-autoscaler/enabled',

'violations_history.tags.k8s.io/cluster-autoscaler/nops-test-eks',

'violations_history.tags.kubernetes.io/cluster/nops-test-eks',

'violations_history.tags.nOps',

'violations_history.tags.ownder',

'violations_history.tags.owner',

'violations_history.timestamp',

'violations_history.total_io',

'violations_history.type',

'violations_history.update_monthly_change',

'violations_history.user_id',

'violations_history.user_name',

'violations_history.versioning_enabled',

'violations_history.violation',

'violations_history.violation_date',

'violations_history.violation_subtype',

'violations_history.violation_type',

'violations_history.volume_id',

'violations_history.vpc_cidr',

'violations_history.vpc_id',

'violations_history.vpc_name',

'violations_history.write_capacity_units',

'violations_history.write_consumed_percentage',

'violations_history.write_iops',

'violations_history.write_iops_utilization',

'violations_history.write_usage_percents'

Cost#

Commitment Management - Working with Reserved Instances#

Managing and viewing AWS Reserved Instances via the Commitment Management Dashboard

nOps enables you to view usage of AWS Reserved Instances on a near-real-time basis. Reserved instances provide significant cost savings compared to on-demand billing, and instances and usage can be monitored through the nOps Commitment Management dashboard.

This article explains:

  1. Why Use Reserved Instances
  2. Accessing the Reserved Instance Pages via the Commitment Management Dashboard
  3. Reserved Instance Planning
  4. Reserved Instance Coverage
  5. Savings Plans Recommendations
  6. Savings Plans Utilization
  7. Troubleshooting

Why Use Reserved Instances

AWS Reserved Instances can save you significant costs compared with on-demand instance use. Standard reserved instances that you purchase must match certain attributes of your running instances in order to achieve the savings, such as instance type and region. Convertible reserved instances provide more flexibility, and scheduled reserved instances ensure capacity for specific time periods.

See the AWS Types of Reserved Instances documentation for a complete description of each.

nOps’ Commitment Management pages (Reserved Instances Planning, Reserved Instance Coverage) provide insight into your current utilization of the reserved instances you’ve purchased, plus planning guidance to get even more cost savings from this important AWS feature.

Accessing the Reserved Instance Pages via the Commitment Management Dashboard

To access Reserved Instances management:

  1. Log into nOps.
  2. From the Dashboard, click Cost to open the drop-down menu, and choose Commitment Management:
  3. Note that in the Commitment Management dashboard, the following tabs are for Reserved Instances:
    • Reserved Instance Planning
    • Reserved Instance Coverage

Reserved Instance Planning

Use the Filters in the left pane to see reserved-instance recommendations, and historical usage, by AWS account, instance type, and other criteria. Using filters to subset the recommendations can help target how your costs could improve for various reserved-instance recommendations.

Recommended Instance Reservations

  • Based on your current instance use, this table recommends how many reserved instances you should have to cover that use, and shows what the savings would be.
  • Note that the instance use is per combination of instance type, AWS region, and OS – the three left-hand columns.
  • If the number of records exceeds the table maximum, click the blue Download button to get a CSV that includes the full list of recommendations:

Historical Usage

  • The historical usage table lists – by instance type, availability zone, and OS – your instance usage over the past five months, to aid your RI planning.

Reserved Instance Coverage

To use the Reserved Instance Coverage tab, you must first enable it.

Enabling the Reserved Instance Coverage Feature

The Reserved Instance Coverage tab is available only to Client Member users who subscribe to this feature; it is not available to Partners or Partner Clients. Only Client Members can subscribe to this feature and configure their environment to enable the tab.

This feature currently only shows coverage for EC2 instances.

To use this dashboard, you must configure your AWS environment using an nOps-authored CloudFormation stack from the nops-aws-forwarder project in the nOps GitHub repository. The project contains a ReadMe that describes requirements and installation, and includes a button that launches the CloudFormation stack.

As noted in the ReadMe, you must configure AWS CloudTrail with an S3 bucket for CloudTrail logs before deploying the CloudFormation stack. The CloudTrail event log in your AWS account provides nOps the information needed to track your EC2 instances and calculate RI utilization. Note that the S3 bucket for AWS CloudTrail and the nops-aws-forwarder must be in the same region.

The Reserved Instance Coverage Tab

The boxes at the top of this page summarize:

  • Running Normalized Units
  • Reserved Normalized Units
  • Running Instance Coverage

Running Normalized Units and Reserved Normalized Units are explained by the AWS documentation for the various instance families.

Running Instance Coverage is the percentage of running instances that are covered by a reserved instance – 100% means all normalized units within a given size are covered, and 0% means that none of the normalized units within a given size are covered.
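
As a rough sketch of that arithmetic (the per-size normalization factors below follow AWS's published table for common instance sizes; nOps performs this calculation for you, so the code is illustrative only):

```python
# Illustrative sketch of normalized units and Running Instance Coverage.
# Factors follow AWS's published normalization table (small = 1, doubling
# with each size step). nOps computes these figures for you.
NORMALIZATION = {
    "nano": 0.25, "micro": 0.5, "small": 1, "medium": 2,
    "large": 4, "xlarge": 8, "2xlarge": 16, "4xlarge": 32,
}

def normalized_units(instances):
    """Sum normalized units for a list of (size, count) pairs."""
    return sum(NORMALIZATION[size] * count for size, count in instances)

def coverage_percent(running_nu, reserved_nu):
    """Percentage of running normalized units covered by reservations."""
    if running_nu == 0:
        return 0.0
    return min(100.0, 100.0 * reserved_nu / running_nu)

running = normalized_units([("large", 4), ("xlarge", 2)])    # 16 + 16 = 32
reserved = normalized_units([("large", 2), ("2xlarge", 1)])  # 8 + 16 = 24
print(coverage_percent(running, reserved))  # 75.0
```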

Below the boxes, the Reserved Instances list gives RI coverage for AWS accounts broken out by region, tenancy, platform, and family.

Note: You can sort the table on this page by clicking any column heading. Filtering and column-choice options appear when you hover over a column heading:

When you hover over a column heading and click the symbol that appears, options for column sizing are displayed:

Filtering and column-choice options are also available in the heading of the drop-down – here’s the filtering option used to show reserved-instance coverage for a specific region:

The column-choice options let you include in the table only the columns that are useful to you:

Symbols in the column headings show the sorting and filtering that are in effect:

Note also the arrow → in the Action column, which you can click to view the details page for that account group. More than one account can be included in a group.

The filtering and sorting options you’ve chosen on the main Coverage page persist on that page if you navigate to a details page then back again, and the same options are available on the details page (though the details sorting and filtering resets when you leave that page).

Note also the Refresh data button at the upper right:

Reserved Instance Coverage Details

In the details page, accessed by clicking the arrow in the Action column, boxes at the top summarize account, region, OS (platform), etc. of the line you’ve chosen:

The Usage Summary graph shows running and reserved instances in normalized units, over the last 12 hours – though you can change the time period by pulling down the blue button at the right:

The delta between the Reserved and Running lines in the graph helps you understand how under-provisioned or over-provisioned you are in reserved instances over time.

Important: You can set a webhook to inform you if you are running a deficit or a surplus on Reserved Instance coverage. See the Webhooks topic for more information.

Note the two tabs below the chart that provide tabulated details for reserved and running instances:

And, in the tables under those tabs, the column heads provide the same sorting, filtering, and column-choice abilities as described above for the Coverage page itself. Click on any column heading to sort (repeated clicks change sort order), or hover over any column heading and click the icon that appears to filter, choose columns, and adjust column width.

Use the Go Back button at the upper left to return to the Reserved Instance Coverage page:

Savings Plans Recommendations

In the Savings Plans Recommendations tab, you can adjust the filter properties to calculate the savings you could achieve by buying Savings Plans:

In the Filters section, you can select the desired:

  • AWS Account
  • Savings Plans Type
    • Compute
    • EC2 Instance
    • SageMaker
  • Savings Plans Term
    • 1 Year
    • 3 Year
  • Payment Option
    • No Upfront
    • Partial Upfront
    • All Upfront
  • Look-Back Period in Days
    • 7 Days
    • 30 Days
    • 60 Days

Once you set the filter, in the Recommendations section, you can see the:

  • Upfront Cost on the Savings Plan
  • Hourly Commitment To Purchase
  • Estimated Utilization of the plan based on your current usage
  • Estimated Monthly Savings:

You also get a look-back analysis; you can adjust the look-back period in the Filters section. In the Look-Back Analysis section, you can see:

  • Look-Back on Demand
  • Estimated Savings Plan
  • Estimated New On Demand
  • Estimated Savings
  • Estimated Return on Investment (ROI)
  • Minimum Hourly Charges
  • Maximum Hourly Charges
  • Average Hourly Charges
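
How these figures relate can be sketched with a simplified model (the actual numbers come directly from AWS; the covered-spend and discount inputs below are illustrative assumptions, not nOps or AWS parameters):

```python
# Illustrative only: how the look-back figures relate under a simple model
# in which a Savings Plan absorbs part of the look-back on-demand spend at
# a discounted rate. Actual figures come from AWS's recommendations.
def lookback_analysis(lookback_on_demand, covered_spend, discount):
    """covered_spend: on-demand dollars the plan would absorb;
    discount: fractional Savings Plans discount (e.g. 0.3 for 30%)."""
    estimated_sp_cost = covered_spend * (1 - discount)
    estimated_new_on_demand = lookback_on_demand - covered_spend
    estimated_savings = covered_spend - estimated_sp_cost
    roi = estimated_savings / estimated_sp_cost if estimated_sp_cost else 0.0
    return {
        "estimated_savings_plan": round(estimated_sp_cost, 2),
        "estimated_new_on_demand": round(estimated_new_on_demand, 2),
        "estimated_savings": round(estimated_savings, 2),
        "estimated_roi": round(roi, 4),
    }

r = lookback_analysis(lookback_on_demand=1000.0, covered_spend=800.0, discount=0.3)
print(r["estimated_savings"], r["estimated_roi"])  # 240.0 0.4286
```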

All the Savings Plans and the details that you see on this page come directly from AWS. When you set a filter, nOps sends your selections to AWS and simply shows the response, giving you an easy way to find the Savings Plans that match your requirements.

Savings Plans Utilization

If you have purchased Savings Plans, you can see how well you are utilizing them.

In the Savings Plan Utilization tab, you can select the AWS account in the Project drop-down list and use the Calendar field to filter the timeframe for which you want to check the utilization of your Savings Plans:

In the Savings Plan Utilization tab, you can see the:

  • Net Savings For Selected Period
  • Savings Plans ARN
  • Hourly Commitment
  • Total Commitment (in hours)
  • Used Commitment
  • Unused Commitment
  • Utilization in percentage
  • Net Savings
  • On Demand Cost Equivalent
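
The utilization and unused-commitment figures follow directly from the commitment totals; a minimal sketch of the arithmetic (nOps reports these figures for you):

```python
# Minimal sketch: utilization is the used share of the total commitment
# for the selected period. Values are in commitment-hours.
def savings_plan_utilization(total_commitment, used_commitment):
    unused = total_commitment - used_commitment
    pct = 100.0 * used_commitment / total_commitment if total_commitment else 0.0
    return {"used": used_commitment, "unused": unused,
            "utilization_percent": round(pct, 2)}

r = savings_plan_utilization(total_commitment=720.0, used_commitment=648.0)
print(r["unused"], r["utilization_percent"])  # 72.0 90.0
```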

You can also click the “>” icon to see a detailed breakdown of each of your savings plans. When you click it, you will see:

  • Account ID
  • Start Date
  • End Date
  • Savings Plan Type
  • Instance Family
  • Region
  • Payment Option
  • Term
  • Amortized Monthly Recurring
  • Amortized Upfront
  • Amortized Total

Troubleshooting for Reserved Instance Usage

Q: Why don’t I see any data on the page?

A: There are a few possibilities for why you may not see any data on this page.

  • You may have opted not to enable this feature, or you may not be using any reserved instances.
  • You may not have configured CloudTrail or the nOps Forwarder that provides the data to nOps. See Enabling the Reserved Instance Coverage Feature above.
  • If you have enabled this feature and done the configuration, it may take up to 10 hours to receive data about your instances. Contact nOps Customer Support if you do not begin to see the data after that time.
  • This tab is not available for Partner Users or Partner Client Users.

Cloud Resources Cost - Stay on Top of Cost Changes#

Choosing Cloud Resources Cost from the Cost Control pull-down gives you a set of tabs that show your cloud spend broken out by:

  • Cloud Accounts – spend by account in AWS, Azure, etc.
  • Regions – spend by region in cloud vendor
  • Cloud Services – spend by service in each cloud vendor
  • Resources – spend from cloud resources being used
  • Non-resources – spend not specifically from cloud resources
  • Usage Types – spend by AWS usage type
  • Operations – spend by AWS operation
  • Tags – spend by tagged resource, with tagged resources grouped under key name
  • Change Management – log of daily cost changes

The first seven of these tabs (Cloud Accounts, Regions, Cloud Services, Resources, Non-resources, Usage Types, and Operations) each give you:

  • A Spend Summary at the top:
    • Yesterday’s Total Spend
    • Last Week’s Spend
    • Month to Date Spend
    • AWS Credits (applicable tabs only)
  • An array of Filters in the vertical box at the left, enabling you to view cost trends over time, and to understand which resources contribute to higher costs.
  • A History bar chart just below the Spend Summary. Note the two buttons above the right side of the chart:
    • See Spend Forecast
    • See Spend History
  • Details of Daily Spend listed in a table below the History chart

Spend Summaries

A few notes:

  • The periods for the Spend Summary boxes Yesterday, Last Week, and Month to Date are as defined by AWS.
  • If you choose a specific date range in the Filters box at the left, the Spend Summary becomes just the total spend for that date range (no yesterday’s spend, last week’s, or month to date).
  • On the Resources and Non-resources tabs, the Spend Summary gives the appropriate subtotals for resources or non-resources. For the other five tabs, the Spend Summary gives the same overall totals. (The History chart and Daily Spend table below the Spend Summary do give details broken out per the specific tab title.)
  • Credits: The overall credits value in the Spend Summary is as reported by AWS and includes both AWS credits and refunds. The number in the Credits box reflects the specific date range set in the Filters box at the left. The credits value shown when you hover your cursor over a “circle-i” is the portion of AWS credits plus refunds, as reported by AWS, for yesterday, last week, or month to date, depending on the box.
  • Hover text summary: Hovering your cursor over the “circle-i” in any of the three Spend Summary boxes gives the Resources, Non-resources, and AWS Credits value for the time period of that box (yesterday, last week, month to date). These numbers and the associated time periods are as defined in AWS.

Filters for Analyzing Costs

The Filters vertical box at the left allows you to zero in on a combination of resources and parameters and so find the sources of cost trends over time, or understand which resources contribute most to higher costs. See How to use nOps Search for information on even more sophisticated search possibilities.

Filtering criteria you’ve chosen are shown by ovals above the Spend Summary, so that you can easily track the filtering criteria that you have in effect — however, note that any custom date range is not represented by an oval above the Spend Summary.

The Filters box includes a calendar tool, which allows you to specify a date range for which costs are to be displayed and summed. When specifying a custom date range:

  • Use the Apply button at the very bottom of the enlarged calendar tool to make your date range take effect.
  • Note the “easy-button” pre-specified date ranges, like 1W, 3M, or May 2022, just above the Apply button.

The default cost display might only cover the preceding month. The calendar tool can be used to view cost changes over time periods that are meaningful to your business.

Spend figures next to regions, cloud-managed services, and other criteria listed in the Filters box are the total for that filter criterion and will match the total in the Daily Spend table on the page.

History Chart

The color-coded choices beneath the chart can be clicked to exclude each from the chart, enabling you to focus on particular regions, services, resources, etc.

Daily Spend

  • The Total column in the table is the total for the date range set in the Filters box on the left.
  • The table of daily values scrolls left to right through the days (scroll bar at bottom) and is paginated 10 records per page (page selection at lower right).
  • The Daily Spend detail on each page has a Download CSV button at the upper right that downloads all spend records for that page. For example, if a tab such as Operations has 170 records, the default view may display only 10 records at a time. However, clicking the download icon will download all records (in this example, 170) in a .csv file:

Non-resources

This tab gives costs not associated directly with a resource, such as taxes and AWS or Azure services (for example, AWS Key Management Service, or AWS CloudTrail). To see a complete list of your non-resource costs for the selected filter criteria (including selected date range in the Filters box calendar tool), click Download CSV at the top of the Daily Spend table below the History bar chart.

Cost Details for Resources and Non-resources

You can get details for any Daily Spend value or Total value in the Resources or Non-resources pages by clicking on the value:

Tags

The Tags tab gives the spend for all tags by key name. Note that:

  • To see the spend for each value under a key name, expand a key name by clicking on it, or by clicking on > at the right end of the row.
  • To see detailed costs for tag key-value pairs, use the Tag Explorer page under Cost Control (Cost Control pull-down, choose Tag Explorer).
  • A blue Download button (upper right) gives you a CSV file of the spend figures by key name, for all keys. The CSV does not break out by key value.
  • The Cost (total) column gives the total spend for the time period specified in the Filters column at the left — default, 1 month to date.

Change Management

The Change Management tab lists the changes in your account and the potential billing impact of these changes. You can expand or collapse each day’s list of changes by clicking the down arrow at the right, and you can click on the Daily, Weekly, or Monthly filter options to see the total spend change over daily, weekly, or monthly intervals for each resource type. Change the Sort by options to view increases or decreases by cost or by percent.
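
The increase and decrease figures are ordinary period-over-period deltas; a minimal sketch:

```python
# Minimal sketch of a period-over-period cost change, in dollars and percent.
# Percent change is undefined when the previous period's spend was zero.
def cost_change(previous, current):
    delta = current - previous
    percent = 100.0 * delta / previous if previous else None
    return delta, percent

delta, pct = cost_change(previous=500.0, current=575.0)
print(delta, pct)  # 75.0 15.0
```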

Finding Cost Changes

Use one or a combination of the following methods to explore cost changes, compare what you paid this month vs last month, or investigate a spike in costs.

  • Check costs on the dashboard. From the Dashboard, navigate to Cost Control / Cloud Resources Cost. Check the Cloud Accounts, Cloud Services, Resources, and Operations spends. Spend numbers flagged in red are higher than in the previous time period. Check for variances. You might notice, for example, that unexpectedly higher costs stem from adding a number of new resources.
  • Use the calendar tool. For example, in the Resources tab, if the spend and usage look normal and there are no spikes, use the calendar tool to include costs for the prior month. This may show changes in resources. Also examine the Non-resources tab to see how items such as support costs, usage costs for pausing and restarting an instance, egress costs, or tax costs are contributing to your spend.
  • Also in the Resources tab, use the various filters in the box at left – for example, to isolate key resources by tags, region, account, usage type, etc., so that you can evaluate a specific resource or a set of resources for changes in cost over time.
  • From the Regions tab, click on See Spend Forecast to see estimates based on your current spend.
  • Set up Notifications. From the Change Management tab click on Subscribe to be notified of cost changes. You can select a period of time, and a range for the cost comparison (by week or by month). Enter emails for people who will receive these notifications.

Resource Rightsizing - Tune Down Over Resourced EC2 Instances#

How to use Resource Rightsizing

Rightsizing is one of the best ways to bring cloud costs under control. Doing it well requires continuously analyzing instance performance, usage patterns, and needs. Then turn off idle instances and rightsize any instance that is either poorly matched to its workload or over-provisioned.

Rightsizing is an ongoing process, since resource needs are constantly changing. To achieve cost optimization, make rightsizing a regular part of your cloud management process. nOps simplifies both resource analysis and monitoring.

Review Amazon CloudWatch metrics to identify usage patterns and needs so you can take advantage of rightsizing opportunities:

  • Steady State: In the steady state, the load remains at a constant level for some time. It is even possible to forecast the compute load at any one time. For this type of usage pattern, consider Reserved Instances. They can yield significant savings.
  • Variable and predictable: For such instances, the load varies over time but on a predictable schedule. AWS Auto Scaling is ideal for applications that exhibit stable demand patterns weekly, daily, or on hourly usage variability. You can use AWS Auto Scaling to scale EC2 capacity whenever there is a spike or fluctuation in traffic.
  • Dev/test/production: Turn off testing and development environments in the evening, since organizations usually use them only during business hours.
  • Temporary: Do you have temporary workloads with flexible starting times that you can interrupt? Avoid using an on-demand instance. Instead, place a bid on an Amazon EC2 Spot Instance.
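The patterns above can be sketched as a toy classifier over a window of CloudWatch CPU samples. The thresholds and labels here are illustrative assumptions, not nOps or AWS values:

```python
from statistics import mean, pstdev

def classify_usage(cpu_samples: list[float]) -> str:
    """Label a CPU-utilization series with a rough usage pattern.

    Thresholds (2% idle, 5-point spread for 'steady') are
    illustrative assumptions only.
    """
    avg = mean(cpu_samples)
    spread = pstdev(cpu_samples)  # population standard deviation
    if avg < 2.0:
        return "idle"       # candidate for stopping the instance
    if spread < 5.0:
        return "steady"     # consider Reserved Instances
    return "variable"       # consider Auto Scaling or Spot

print(classify_usage([40, 41, 39, 40, 42]))  # constant load -> "steady"
```

A real analysis would pull weeks of data points per instance before deciding, but the idea is the same: low average means idle, low variance means steady, everything else varies.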

Click on Cost Control:

Select Resource Rightsizing:

Use the tabs at the top to switch between EC2, RDS, and S3:

Use the Filters section to look through specific information on:

  • AWS Account
  • Instance Types
  • CloudFormation: StackName
  • Autoscaling Group
  • Regions
  • Tags

The Current Config and Suggested Config columns use data from the past two weeks to suggest downsizing:

Click on Resource Details to look at Resource Details, Cost History, and Configuration History:

How Does the nOps Rightsizing Algorithm Work?

With CloudWatch Enabled

nOps collects 6 key metrics for every CloudWatch-enabled EC2 instance in your environment:

  • NetworkIn
  • NetworkOut
  • DiskReadOps
  • DiskWriteOps
  • CPUUtilization
  • mem_used_percent

For each instance in your environment, we make the following calculations:

  • Network average
  • Harmonic mean of disk read and write
  • Disk read and write averages
  • Average network in / out utilization to six points of precision
  • Average memory utilization
  • Average CPU utilization

We continuously monitor a 30-day sample of your utilization data and match your CPU requirements to the latest offerings in the AWS pricing catalog to select the best match for your resource requirements.
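The calculations listed above can be sketched for a single instance's metric samples. This is an illustrative reconstruction, not the actual nOps implementation; the harmonic mean used for disk operations follows the standard formula n / Σ(1/xᵢ):

```python
from statistics import mean, harmonic_mean

def summarize_instance(samples: dict[str, list[float]]) -> dict[str, float]:
    """Aggregate CloudWatch samples for one instance.

    `samples` maps metric names (NetworkIn, DiskReadOps, ...) to
    equal-period data points. Illustrative sketch only.
    """
    return {
        # average across inbound and outbound network samples
        "network_avg": mean(samples["NetworkIn"] + samples["NetworkOut"]),
        "disk_read_avg": mean(samples["DiskReadOps"]),
        "disk_write_avg": mean(samples["DiskWriteOps"]),
        # harmonic mean of combined disk read/write operations
        "disk_harmonic": harmonic_mean(
            samples["DiskReadOps"] + samples["DiskWriteOps"]
        ),
        "cpu_avg": mean(samples["CPUUtilization"]),
        "mem_avg": mean(samples["mem_used_percent"]),
    }
```

Each summary value is then compared against instance types in the pricing catalog to find a smaller type that still satisfies the observed requirements.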

Without CloudWatch Enabled:

nOps will recommend the latest offering upgrades for your given instance class, when they are available.

Spot Advisor - Switch to Spot Instances#

Identify EC2 instances and view details on the Spot instances

How to View nOps Spot Advisor

Spot Advisor helps you find EC2 instances that can be migrated to Spot instances. This guide shows how to navigate to the Spot Advisor to see the instances that can be converted to Spot Instance pricing.

On the dashboard, go to the Cost Control menu item and open its drop-down menu. Click the Spot Advisor menu item.

This will lead you to the Spot Advisor Dashboard which shows the list of instances and the cost estimates for the Spot option of those instances.

To view more detail on a Spot instance, click the instance name.

A new screen will pop up with detailed information on EC2 usage for the Spot instance. There are options to view Resource Details, Cost History, and Config History.

Tag Explorer - Manage Tag Competency#

How to use Tag Explorer

Tag Explorer

The Tag Explorer feature allows you to assess the tags you have attached to resources for billing information, to better organize your costs, such as costs for different stacks, customers, environments, projects, departments, or teams. Displaying the current information also highlights untagged resources. Following are some terms and what they mean:

Tag Key: the top level of the tag.

Tag Value: you can assign more than one tag value to a tag key. For example:

  • Application: Production
  • Application: Testing

Use Case: Use tags to understand the cost of all the resources used to run an app. You can report on Tag Keys and Tag Values. It’s an easier way to group resources to view their corresponding costs.
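The grouping described above amounts to summing resource costs per value of one tag key, with untagged resources surfacing separately. A minimal sketch (the resource records and tag names are hypothetical, not nOps data):

```python
from collections import defaultdict

def cost_by_tag(resources: list[dict], key: str) -> dict[str, float]:
    """Sum cost per value of one tag key; untagged resources land in 'untagged'."""
    totals: dict[str, float] = defaultdict(float)
    for r in resources:
        value = r.get("tags", {}).get(key, "untagged")
        totals[value] += r["cost"]
    return dict(totals)

resources = [  # hypothetical billing records
    {"cost": 12.0, "tags": {"Application": "Production"}},
    {"cost": 3.5,  "tags": {"Application": "Testing"}},
    {"cost": 1.0,  "tags": {}},  # untagged resource
]
print(cost_by_tag(resources, "Application"))
```

This mirrors the Tag Key / Tag Value relationship: one key ("Application") with multiple values ("Production", "Testing"), each accumulating its own cost.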

This feature is available from a Client Page.

On the NavBar click Cost Explorer > Tag Explorer

Filter by any of the following:

  • Date
  • Custom Rules
  • Resources Launched after
  • Resources Launched before
  • AWS Account
  • CloudFormation: Stack-Name
  • Regions
  • AWS Managed Resources
  • Operations
  • Usage Type

Hovering over a row in the Tags used list on the left-hand side breaks down the cost of each AWS managed service by color. Hovering over a graph line brings the cost of that AWS managed service to the top of the graph key.

In the All tags list, you can select a tag to view details. To do this click the arrow in the ACTION column.

The following page displays a breakdown of the Tag name. From this page you can see: Service, Resource name, Project, Region, and Total Cost.

Click the arrow in the ACTION column for details about the selected Service.

A pop-up dialog provides details on when the Service was created and last used.

  • Select Add a Jira Ticket to connect to Jira to create a ticket.
  • Click View Resource on AWS Console to navigate directly to the resource in your AWS Account.

Rules#

View IAM Violations#

View IAM security principles that show any violations

How to view IAM violations

IAM violations show you IAM security principles that your account does not comply with. For this to be possible, you must first set up your AWS account(s) on nOps. These violations can range from accounts not using MFA to accounts not granted least-privilege permissions. To view IAM role violations, take the following steps:

Go to the nOps Rules tab.

The nOps Rules page launches with tabs for the different options: Security, Cost, Reliability, Operations, Performance, and Change Management.

On the left sidebar there is a section called Filters. Under Filters there is a search bar. Type IAM in the search bar and press Enter to search.

This will show a list of IAM violations on the right-hand side. In the example screenshot there are at least four violation types: 266 AWS IAM roles aren’t attached to any resource; 36 users have not been granted least-privilege permissions in AWS IAM; 11 active root account access key(s) detected; and 6 AWS IAM users aren’t using MFA-enabled sign-in.

View Under-Utilized EBS Volumes#

Fix under-utilized EBS volumes from the nOps Rules page

How to View Under-Utilized EBS Volume

nOps lets you view under-utilized EBS volumes on the nOps Rules page. The Cost section displays under-utilized resources that can then be fixed in the AWS account.

Navigate to the menu bar, and click on the nOps Rules menu item.

This will lead to the nOps Rules page

The nOps Rules dashboard has a series of tabs labeled Security, Cost, Reliability, Operations, Performance, and Change Management. Click the Cost tab.

This will show a list of items that can be re-configured to reduce your overall AWS cost. From the list, identify the item unused Amazon EBS volumes detected.

View Under-Utilized Network Resources#

Manage under utilized resources and address the resource by overriding the violation or fix through the AWS account

How to view Under-Utilized Network Resources

nOps lets you view under-utilized network resources on the nOps Rules page. The Cost section displays under-utilized resources that can then be fixed in the AWS account.

Tip: When looking at under-utilized resources, note that nOps does not recognize credits that are applied to the account. (Screenshot provided from the Dashboard.)

See example below:

How to find Underutilized Resources

Navigate to the menu bar, and click on the nOps Rules menu.

This will lead to the nOps Rules dashboard.

The nOps Rules dashboard has a series of tabs labeled Security, Cost, Reliability, Operations, Performance, and Change Management. Click the Cost tab.

This will show a list of items that can be re-configured to reduce your overall AWS cost. Under-utilized network resources range from unused Elastic IPs (EIPs) to unused NAT resources.

Under the Rule Name there is an option to view the underutilized resource in detail. Click the arrow to view the resource in detail.

The Unused Resource Details page lists the resources in detail. Clicking the three dots on the right-hand side of a resource detail lists actions to resolve the issue.

Reports#

How to Create a Custom Template - Partners#

Custom Templates are used to create SOWs, reviews, and more.

How to Create a Custom Template – Partners

The Custom Template feature is a tool for customizing reports. A saved report template can later be used for a Custom Report for the company or clients.

Click on Custom Templates on the Dashboard

Or, click on Reports>Custom Templates

On the Custom Template page, click on Create Custom Template

A pop-up will appear. Add a name for the report, and click New Template.

*There is also an option to edit existing templates: clicking one of the created custom templates takes you to a screen to edit the existing version.

On the right-hand side, the Library options let you select which blocks to add to the report.

*Some items add their block to the bottom of the template when clicked. Others need to be clicked and dragged.

Within the built template you can move blocks up and down, or delete added blocks that you no longer need.

Heading – Create a Title for the Report

Paragraph – Add text to the report (Explanation, Purpose…)

Table of Contents – Adds a table of contents listing the blocks used in the template

Remediation – Select a plan to remediate

Recommendation – Select any recommendations for improvement

Table – Add a Pricing Table

Cost Summary – AWS spend from yesterday, last week, and month to date

All Rules – Select rules from Security, Cost, Reliability, Operations, Performance

Resources – Resources from AWS account

High Risk Issues – Issues that are considered HRI

Example of how the Template will appear:

How to Create and View Custom Reports#

How to Create and View Custom Reports (MSP)

If you are not logged into your Partner Dashboard already, follow these steps:

Navigate to the top right corner of the screen where you have the name of the user that is currently logged in. Click on the name to display the drop-down menu. On the menu that shows, navigate to the Partner Dashboard menu item and click.

This will take you to the Partner Dashboard page.

Navigate to the top menu bar and click on the Templates menu item. On the drop-down that pops-out, click on the Custom Reports menu item

This will lead to the list of reports you can create or view.

Creating a New Report

Note: Reports are created and managed by the Administrator. You can only view the Custom Reports.

Select which template to use. A pop-up will ask for a name and which client the report is for.

Once on the Custom Report, there are options to select which items to add to the report for a client. You can drag and drop blocks from the Library and add comments to the report.

Clicking the Share button lets you share the report through email or by copying the report URL.

Security#

Check if the Root Account has MFA Activated#

How to Check the Root Account MFA Status

nOps can check security across the AWS accounts it is connected with. One crucial part of an AWS account is the root account. Maximum security must be applied to it; any vulnerability that is found and exploited can be devastating to the whole AWS account. Here is how to check whether MFA is activated for your AWS account.

Click on the Security Dashboard menu item, which is one of the top menu items.

This will lead to the Security Compliance Dashboard, which displays different rules and violations. On the page, under IAM Role Permission and Account Usage, scroll to the item called AWS accounts doesn’t have root account MFA.

This shows whether the root user has MFA enabled. Clicking the arrow on this row displays who is not in compliance.

View Security Violations#

View all security violations for Network Layer, S3, IAM and more

How to view all Security Violations

Many security violations can occur across an AWS account, from the network layer to S3 to IAM and others. This guide shows how to view all security violations in your AWS account.

Go to the Security Dashboard menu item.

This view gives a summary of all security violations on your AWS account, which can be filtered with these options: Custom Rules, AWS Account, CloudFormation Stack Name, Tags, Regions, and VPC.

Workload#

Create Workloads#

Describes how to create a workload

How to Create a Workload

Workloads in nOps help you group different AWS resources that have been created within your AWS environment. This makes it easier to carry out various assessments on the specific services that make up the Workload. These are the steps to create a workload in nOps.

Access Workloads by clicking Workloads from the tabs.

Creating a workload is as easy as selecting the Workloads tab and clicking the Add New Workload button on the top right. Once a workload is created you can edit it to add or remove resources. Note that you cannot rename a workload, but you can always create a new one.

What to know before you begin

nOps offers the following review types and compliance frameworks:

Review Types: Well-Architected Framework Review (WAFR), Foundational Technical Review (FTR), Software as a Service (SaaS), Serverless
Compliance Frameworks: HIPAA, SOC2, CIS

Creating Your First Workload

If this is the first time you are creating a workload, click Create New Workload in the middle of the screen. After that, the Create New Workload button moves to the top right of the window.

To Create a new Workload

  1. Click the Create New Workload button. This opens a dialog with a form for creating the Workload.
  2. Enter the following information:
    • Workload Name – a unique identifier for your workload.
    • Review Types – select the review types you want to use to assess this Workload. Select Well Architected for AWS Cloud accounts.
    • Compliance Frameworks – select the compliance frameworks for this workload from the list. This setting is optional.
  3. If you are using an AWS account, select the toggle to allow nOps to find your resources and save the WAFR compliance progress. Select an AWS Account and an Environment type, and enter a Description for the Workload.
  4. Specify the Workload resources:
    • Regions – the regions that nOps will pull resources from. This defaults to All.
    • AWS Managed Services – the AWS services that nOps will include in your workload. This defaults to All.
    • VPCs – the VPCs that contain the resources that nOps will include in your workload.
  5. Select tags to be assigned to the resources you want to include, e.g., “ApplicationA.” You can add as many tags as you need.
  6. Add notes for the workload and click Save to save and create this workload. And that’s it!

It may take a few minutes for the Workload to appear. The workload is displayed on the Workloads Dashboard.

Click on the workload to view the details.

Create Workloads - MSP#

How to Create Workloads (MSP)

If you are not already logged into the Partner Dashboard follow these steps:

Navigate to the top right corner of the screen where you have the name of the user that is currently logged in. Click on the name to display the drop-down menu. On the menu that shows, navigate to the Partner Dashboard menu item and click.

This will take you to the Partners Dashboard page.

On the Partners Dashboard, click the Workloads menu item at the top. This will take you to the Workloads page.

On the Workloads page, click Create New Workload at the top-right corner of the screen.

A form will pop up at the side. Fill out the form with all the required information.

  • Workload Name – The unique identifier for your workload.
  • AWS Account(s) – The AWS Account(s) where the resources for your workload live.
  • Workload Type – Defines the overall workload type. Please select “Well-Architected.”
  • Lens – nOps supports the AWS lens concept. Please select FTR for the lens type.
  • Environment – This defaults to Production and defines the environment from an AWS perspective.
  • Jira project – If you are using the built-in Jira integration, you will be able to select a Jira project to integrate with.
  • Description – A text description of your workload.

After you have filled out the metadata for your workload, you can click the gray bar that says, “Specify Workload Resource,” causing the query builder to slide into view. nOps allows you to specify rules that define which resources will be added to the workload.

  • Regions – The regions that nOps will pull resources from. This defaults to All.
  • AWS Managed Services – The AWS services that nOps will include in your workload. This defaults to All.
  • VPC – The VPCs that contain the resources that nOps will include in your workload. This defaults to All.
  • Tags – Select tags to be assigned to the resources you want to include, e.g., “ApplicationA.”

Click “Save” to create your workload.

Add New Resources to Workloads#

Attach new resources to an existing or new WAFR/Workload

How to Add New Resources in Workloads

You can add new resources to an existing workload; this guide shows you how.

  1. Login to your nOps account
  2. Click on the Workloads Tab
  3. Choose an existing Workload and click the Edit button on the right.
  4. At the Modify Workload dialog, click Specify Workload Resource and edit or change the resources by using the options in the drop-down.
  5. Add any additional tags then click Save to add the resources to the workload.

See Also: How to create Workloads, and Viewing and Managing Workloads

View and Manage Workloads#

View and Manage Workloads using the dashboard

Workloads are a collection of resources that include your compute resources and comprise a basic unit. A workload should contain all of the relevant resources (EBS volumes, EC2 instances, S3 buckets, etc.) and specify all of the Regions where those resources live.

This article contains the following information:

Why create a workload?

Viewing a Workload using the Workload Summary tab

Compliance Ops

Why create a workload?

Creating a workload lets you define competencies, lenses, and frameworks for a workload environment in order to get contextual recommendations, targeted assessments and detailed reports.

If you are new to nOps you may find that a Workload has already been created for you. The workload contains all of the regions where you currently have compute resources for the accounts you have shared with nOps.

Recommendation: nOps recommends that you create a Production workload that contains all of the compute and other resources used for your Production environment. This workload should be assessed periodically using the AWS Well-Architected review.

You can group different AWS resources in a workload to assess specific services. Create workloads to monitor and report on different parts of your Cloud Architecture. For example, a Production Workload that is comprised of all Production computing resources and regions, or a Dev-ops workload that contains only dev-ops computing resources and regions.

Click the link for information about How to Create a Workload

Viewing a Workload using the Workload Summary tab

From the Workloads tab, click on a workload to view detailed information.

The Workload Summary displays the information using charts.

Click the arrow on a chart to see details about that Resource type or click See All at the bottom of each chart.

Click an item in the key as seen below to remove it from the chart.

Click any section of a chart to view the complete list of that item on the Resources List page. Click an item in the bar to view information about that specific resource. You can also use the filters or search options on the top right of the list. These selections and available options will change based on the type of resource you view.

Workloads can be viewed in the following ways:

Resources Summary – Displays data about Total Resources, Violations, Reserved Instances, and Autoscaling
Resources by Regions
Resources by Cloud Services
Resources by Service Category
Tagged and Untagged Resources

Click the Compliance Ops tab to change an AWS Lens or Compliance Framework associated with the Workload.

Compliance Ops tab

The Compliance Ops tab allows you to change your workload to select or remove additional Lenses and Compliance Frameworks. It contains the following sections:

Lenses Summary

Contains Assessment links and downloadable Reports associated with Lenses and Compliance Frameworks.

Integrated reports contain insights and assessments about the selected workload. They walk you through the process of creating and evaluating improvement plans and budgets to plan and track your monthly or yearly spend.

nOps WAFR (AWS Well-Architected) Report

This report is created based on your answers to questions about the six AWS Well-Architected Framework ‘pillars’. For more information about the pillars, see The Six Pillars of the Framework.

The report lets you review the state of your workloads and compares them to the latest AWS architectural best practices. The information is based on the AWS tools developed to help cloud architects build secure, high-performing, resilient, and efficient application infrastructure. To ensure compliance, Well-Architected Reviews must be conducted on your production workload on a periodic basis. After the review, prioritize the remediation of any issues identified.

Foundational Technical Review Report

This report is based on the lens types you selected and ensures that your cloud deployment meets the requirements of the lens(es).

Top Recommendations

When you create a workload nOps makes recommendations based on the workload priority, and provides details about what to Fix based on your answers. Click See Details for more information. Click Fix to upload the policy document (or enter a link) for remediation. Then click Upload & Fix to upload the document to the Compliance Documents data repository.

For example if you selected a WAFR (Well-architected framework) lens, nOps may recommend that you upload a reliability incident playbook. Note that creating a workload on an existing AWS account automatically selects the WAFR Review for you.

All AWS accounts should periodically undergo a WAFR review.

Compliance Documents

You can upload documents related to a Workload (or use drag and drop). nOps organizes these documents as you upload them. You can find them easily and in one place. The Compliance Documents repository allows you to track and maintain governance for your cloud environment.

See Also Links:

How to Create a Workload
How to add or change Resources in Workloads

Attach Documents to Workloads#

How to Attach Documents to Workloads

Workloads in nOps help you group different AWS resources that have been created within your AWS environment. This makes it easier to carry out various assessments on the specific services that make up the Workload. These are the steps to attach documents to a workload in nOps:

Click on the Workload link on the top menu bar.

On the Workloads page or Dashboard, you will find already-created workloads if they exist; if there are no workloads, the page will be blank.

Click on any of the workloads to view details and assessments under that workload

On the page, there is a section labeled Workload Attachments.

Click the Upload a file button to pop-up the upload dialog box. Fill out the form with all the details needed.

Click the Choose file option to select the file to upload.

Click the Submit & Upload button to upload the file and submit the contents of the form.

This will take you to the Workload page with the resource that has been uploaded.

Evaluate Risk for a Workload#

Describes the Workload Risk Summary

The Workload Summary displays an assessment toolbar that summarizes the associated risks.

The toolbar can be viewed by all users. Use the following path to see the toolbar. From the top menu bar or from the dashboard:
Click Workloads > Select a Workload from the list > Click the Compliance Ops tab > from the Lenses Summary dialog, click the Assessment button for any lens to see the Assessment page.

This topic contains:

Assessment Page and Toolbar Overview
Overview of Assessments
How Risk Levels are Assigned
The Section Tabs
How to use the Interactive Assessment toolbar
Related Functionality for the Assessment Summary page
See Also Links

Assessment Page and Toolbar Overview

The Assessment page evaluates workloads against existing regulations or threat environment policies. Risk framework questions are based on best practices for compliance standards, and regulations for Security and Network. The toolbar displays:

  • Risk levels: Unanswered, Medium, High, None, and Not Applicable.
  • The number of questions that fall under each risk.
  • A graph for percentage of Assessment Completed.

Each assessment-type contains section tabs below the toolbar. The data in the section tabs changes when any risk-type is clicked. See Section Tabs for more information.

Overview of Assessments

Following is an overview of the different assessment types available through nOps and the pillars (for WAFR) or section tabs that appear under each assessment type. Each question in the section tabs is assigned a risk level. Your answers to these impact compliance with Compliance Frameworks such as HIPAA.

Assessment Type – Contains
WAFR (AWS) – Security, Cost, Reliability, Operations, Performance, and Sustainability pillars. See AWS for more information.
SaaS – Security, Cost, Reliability, Operations, and Performance
Serverless – Security, Cost, Reliability, Operations, and Performance
FTR – Security, Reliability, and Operations

How Risk Levels are Assigned

A risk level is assigned each time a question is answered, based on your selections. Each question has sub-questions, some of which may have a higher risk value than others; answering them can change the risk or threat level for that question. Answer all questions accurately and to the best of your ability, since they impact the safety, security, and threats to your cloud account. Support your answers by uploading and storing documents to ensure compliance.

If a question is unanswered, it is not flagged with a risk assessment. However, any unanswered question may pose a greater risk, since the issues it covers are not mitigated in your environment. You can see which questions are unanswered for a specific section by clicking the Unanswered section. See How to use the Interactive Assessment toolbar for more information.

The Not Applicable section in the toolbar displays a count, for any questions where you have enabled the Question does not apply to this workload toggle. Click the toggle only if the question is not applicable for the selected workload. The question is disabled and removed from the assessment. Removing a question excludes it from the Assessment Completed percentage. This will impact your overall risk levels.
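Under these rules, the completion percentage can be read as answered questions over the applicable (non-N/A) questions. This small sketch is an illustrative reading of the behavior described above, not the exact nOps calculation:

```python
def assessment_completed(answered: int, unanswered: int) -> float:
    """Percentage complete over applicable questions only.

    Questions toggled Not Applicable are removed from the assessment
    entirely, so they appear in neither count. Illustrative formula.
    """
    applicable = answered + unanswered
    if applicable == 0:
        return 100.0  # nothing left to answer
    return round(100.0 * answered / applicable, 1)

print(assessment_completed(answered=18, unanswered=6))  # 18/24 -> 75.0
```

This is why marking questions Not Applicable raises the completion percentage: it shrinks the denominator without changing the number answered.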

The Section Tabs

Section tabs vary based on the type of assessment. Each tab contains its own set of questions, displays the number of questions in each, and how many of them are answered as seen in the following example.

See How to use the Interactive Assessment toolbar for more information.

How to use the Interactive Assessment toolbar

The Assessments toolbar is ‘clickable’. When a section is clicked, it changes color and displays which section tabs have a related issue.

In the following example, the Medium risk section was clicked, and shows that the Security assessment tab contains 1 question that was assigned a risk level of Medium.

Click through both the toolbar and the section tabs to see the changes within your workload.

Click a risk level, then click the section that contains the associated risk. The question will be highlighted as shown in the following example. Questions also display whether any answers were auto-discovered by nOps.

Related Functionality for the Assessment Summary page

  • Click the Expand All /Collapse All button within each question to view question prompts. These may help you answer the question.
  • Click the More (3 dot) menu to see additional functionality such as the ability to Attach a Document or Add a Label.
  • Click the Export Report button at the top right to export a report-type available in the list.
  • Click Exit Assessment to return to the Workload Summary page.

See Also Links:

How to Create Workloads

Viewing and Managing Workloads

Well-Architected Framework Review

Workloads API#

Workloads API Documentation

API token generation

Go to the Settings -> API Key menu item.

Click “Generate API Key”

Copy the generated key.

To use the nOps API, you will need to append your query string in the following manner: https://app.nops.io/nops_api/v1/some_endpoint/?api_key=<YOUR_API_KEY>

List workloads
/nops_api/v1/workload/?api_key=<API_KEY>

Get specific workload data (with answers data) by its ID: /nops_api/v1/workload/<WORKLOAD_ID>/?api_key=<API_KEY>

Get WA summary for a specific workload /nops_api/v1/workload/<WORKLOAD_ID>/rules_summary/?api_key=<API_KEY>

Get a budget for a specific workload /nops_api/v1/workload/<WORKLOAD_ID>/budget/?api_key=<API_KEY>

List attached resources for a specific workload /nops_api/v1/workload/<WORKLOAD_ID>/resources/?api_key=<API_KEY>
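
As a minimal sketch of calling these endpoints from code (assuming only the URL patterns documented above; the workload ID used here is hypothetical), a small Python helper can assemble the query string:

```python
from urllib.parse import urlencode

BASE_URL = "https://app.nops.io"  # nOps API host from the examples above

def workload_url(endpoint, api_key, workload_id=None):
    """Build a Workloads API URL with api_key appended as a query parameter."""
    path = "/nops_api/v1/workload/"
    if workload_id is not None:
        path += f"{workload_id}/"
    if endpoint:
        path += f"{endpoint}/"
    return f"{BASE_URL}{path}?{urlencode({'api_key': api_key})}"

# List all workloads
print(workload_url("", "YOUR_API_KEY"))
# WA summary for a hypothetical workload with ID 42
print(workload_url("rules_summary", "YOUR_API_KEY", workload_id=42))
```

The resulting URLs can then be fetched with any HTTP client; the key is passed in the `api_key` query parameter exactly as described above.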

WAFR

Well-Architected Framework Review

How to perform a WAFR with nOps

In this article you will learn how to perform a Well-Architected Framework Review (WAFR) with nOps. This article contains the following information:

  1. Getting Started
  2. Creating Your First Workload
  3. Defining the Workload Query
  4. Workload Summary View
  5. Running the Well-Architected Framework Review (WAFR)
  6. AWS Well-Architected Tool Integration
  7. IAM Role Updates

Getting Started

AWS Well-Architected helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. Based on five pillars — operational excellence, security, reliability, performance efficiency, and cost optimization — AWS Well-Architected provides a consistent approach for customers and partners to evaluate architectures and implement designs that can scale over time.

To get started, log in to your nOps account and switch to the Workloads > WAFR section. In the Workloads section, you can:

  1. Create a new Workload.
  2. Filter Workload based on the compliance framework.
  3. Search Workloads based on the Workload name.
  4. Edit or Delete a Workload.
  5. Click on a Workload to go to its summary page.

Creating Your First Workload

To create a Workload, click the + Create New Workload button at the top-right corner of the screen:

When you click “+ Create New Workload”, the workload creation panel will appear. Remember that the Workload name cannot be changed once it’s created:

In the Workload creation panel:

  • Workload name — Create a unique identifier/name for your workload. A Workload name cannot be changed once it’s created.
  • Review types — Select the type(s) of review/lens you want to apply to this Workload. For WAFR, the review type must always be “Well-Architected Framework Review (WAFR)”.
  • Compliance frameworks — Select the desired compliance framework — HIPAA, SOC2, or CIS.
  • Create workload on your AWS account — Toggle this option to create and sync this workload in your AWS Well-Architected Tool. If you toggle this option:
    • A new field, “AWS account to save WAFR progress”, is added to the Create new Workload panel.
    • Well-Architected Framework Review (WAFR) is automatically selected in the Review types field.
  • AWS account to save WAFR progress — Select the AWS account your workload will be written to.
  • AWS account to pull resources from — Select the AWS account(s) where the resources for your workload live. All AWS accounts associated with nOps will be shown in this list.
  • Select Environment — This defaults to PRODUCTION and defines the environment from an AWS perspective. Note: Sanctioned Well-Architected Framework Reviews should always be performed on a production workload. You also have the option to create and select a custom environment.
  • Description — A text description of your workload.

Once you’ve filled out the fields, click Save to create a Workload with all the resources in the AWS account specified in the AWS account to pull resources from field.

To filter out the resources that are not required for WAFR, use the Specify Workload Resource section. To learn more about how to specify workload resources, see the next section.

Defining the Workload Query

After filling out the information, in the Create new Workload panel click the gray bar Specify Workload Resource to open a query builder:

The nOps query builder allows you to specify rules that define which resources will be added to the workload. You can change the default settings and specify the filters using the drop-downs:

In nOps query builder:

  • Regions — Select the region(s) that nOps will pull resources from. This setting defaults to All.
  • AWS Managed Services — Select the AWS service(s) that nOps will include in your workload. This setting defaults to All.
  • Select Tag Name & Select Tag Value — Select the tags associated with the resources that you want to include. Only the resources with the selected tags will be included in the workload. When you select a tag in Select Tag Name, the values list in Select Tag Value will update according to the values of the selected tag name.
  • + Add Another Tag — Allows you to add multiple tags.
  • Note — Allows you to add a note against this filter.

Click “Save” to create your workload.
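
Conceptually, the query builder applies these rules as a filter over your resource inventory. The following Python sketch is purely illustrative; the field names and matching logic are assumptions, not the actual nOps engine:

```python
# Illustrative sketch (not the real nOps query engine): how Region, Service,
# and Tag filters conceptually select resources for a workload.

def matches(resource, regions="All", services="All", tags=None):
    """Return True if a resource passes the Regions / Services / Tag filters."""
    if regions != "All" and resource["region"] not in regions:
        return False
    if services != "All" and resource["service"] not in services:
        return False
    # Only resources carrying every selected tag name/value pair are included.
    for name, value in (tags or {}).items():
        if resource.get("tags", {}).get(name) != value:
            return False
    return True

resources = [
    {"id": "i-1", "region": "us-east-1", "service": "EC2", "tags": {"env": "prod"}},
    {"id": "vol-2", "region": "eu-west-1", "service": "EBS", "tags": {"env": "dev"}},
]
workload = [r for r in resources if matches(r, regions={"us-east-1"}, tags={"env": "prod"})]
print([r["id"] for r in workload])  # ['i-1']
```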

Workload Summary View

In the Workloads section, click on any Workload to go to its summary page. There are three tabs on the summary page:

  • Diagram
  • Summary
  • Compliance Ops

When you click on a Workload, you will land on the Diagram tab of the summary page.

Diagram Tab

In the Diagram tab of the summary page, you will see the diagrams of your Workload resources segregated at the level of VPCs, regions, and subnets.

You can click on any VPC, region, subnet, or node to see its details. A summary of violations against your selection is shown on the right side of the diagram:

Summary

In the Summary tab, you have an overview of your Workload:

In this page you will find:

  • Resource Summary — In this section you will find the total number of resources, the number of resources with violations, the number of reserved instances, and the number of resources that are part of Auto Scaling in this Workload.
  • Resource by Regions — Shows a donut chart with a breakdown of the regions your resources belong to. You can hover over the chart to see how many resources you have in each region. You can also click on See All to see the full list of resources by region.
  • Resource by Cloud Services — Shows a donut chart with a breakdown of the cloud services your resources belong to. You can hover over the chart to see how many resources you have in each service. You can also click on See All to see the full list of resources by cloud service.
  • Resource by Service Category — Provides a breakdown of resources by service category in the form of a bar chart, with services on the Y-axis and the number of resources on the X-axis. You can also click on See All to see the full list of resources by service category.
  • Tagged and Untagged Resources — In this section you will find the total number of tags used, tagged resources, and untagged resources in the Workload. This section also shows a bar chart with a breakdown of the values of each tag. You can hover over the chart to see the values of each tag. You can also click on See All to see the full list of tags and their values.

ComplianceOps

In the ComplianceOps tab of the summary page, you will find the details of your assessment:

In this tab, you have:

  • Lenses Summary — In the lenses summary you have the option to add/remove the review lenses and the compliance frameworks. In this section, you will see the assessments and reports according to your selection, along with the Assessment Completed percentage. If you change your selection, the assessments and reports will change accordingly. In this section you also have two buttons:
    • Assessment — Takes you to your assessment section. See Running the Well-Architected Framework Review (WAFR) to learn more.
    • View Report — Takes you to the assessment report page.
  • Top Recommendations — Shows a list of the top recommendations to resolve the violations.
  • Compliance Documents — Shows the documents you attached to each recommendation in the Top Recommendations section.

Running the Well-Architected Framework Review (WAFR)

In the assessment section, you may notice that the assessment is at a completion percentage greater than 0%. This is because nOps uses its rules engine to automatically discover information about the Workload and answer some of the questions in the assessment:

Note: Each question specifies whether it is considered a High, Medium, or Low risk question.

The assessment questionnaire is divided into the six pillars of the WAFR assessment. To switch to a specific assessment pillar, click on the desired pillar:

Alongside the pillar name, you can also see how many questions are answered and how many are still unanswered.

You can also click on the vulnerability levels to see exactly how many vulnerabilities are of the chosen level:

For each question in the WAFR assessment, nOps will either automatically detect the answer to the question or allow you to answer it manually. Clicking on the checkbox(es) in each section will designate that your workload meets or exceeds the particular requirements. You can add notes to a particular question by clicking “Add Note.” Hover the mouse over a question to view a context menu that gives you several options.

  • Autodiscovery Details – Information about what nOps was able to detect in your account.
  • Attach Resources – Allows you to attach specific resources to a question. These resources will be included in the report generated by nOps.
  • Create Jira Ticket – If you have integrated an instance of Jira Cloud, you can open Jira issues from nOps. Use this option to assign tasks while completing your WAFR.
  • Show Description – Shows a description of the question.

To attach a query to your answer as evidence, click + Attach Query:

You can also click on the details icon to open a list of other options that you can use to enrich your answers:

After you have answered each question, if you are working on this assessment for the first time, you can click “Submit Report”. This will enable you to export the report to AWS as part of the WAFR. Click Export Report and select the desired export format.

Clicking “Exit Assessment” will return you to the summary screen where you can upload any additional documentation, see the assessment completion percentage, and export the report of the assessment.

AWS Well-Architected Tool Integration

While creating your Workload in nOps, if you selected “Create workload on AWS account“, your Workload will be synchronized to the AWS Well-Architected Tool. Each Workload in AWS will be listed as if you had created it from the tool itself.

Changes made from nOps can be synchronized to the AWS Well-Architected Tool by clicking Update Report.

IAM Role Updates

If you are using an existing nOps account, you will receive notifications that nOps has added new AWS IAM policies to enable AWS Well-Architected Tool integration. Please update your IAM policies to allow nOps to access the AWS Well-Architected Tool in your account. For more information, you can watch this short video.

Get notified about new AWS IAM policies on nOps – YouTube

Well-Architected Framework Report

View the WAFR for a User account or Client

How to view Well-Architected Framework Report

Log in to your nOps account

This will lead you to the landing page dashboard that shows a summary of different metrics.

Navigate to the main menu and then to the Reports menu item. Click the menu item to view the list, then click WAFR Report.

This will lead to the WAFR Report page, showing the Well-Architected Framework Report.

Export an In-Progress WAFR Report

Send a Workload that is still in progress to a PDF

How to Export a WAFR Report In Progress

Once a Workload has been created, its WAFR Report can be exported even while the assessment is still in progress. These are the steps to export the report while the assessment is in progress.

Click on the Workload link on the top menu bar.

This will lead you to the Workloads page.

Click the In Progress workload that you want to export.

This will open the details page for the WAFR report. Click the Update Access button in the top right corner of the screen.

This will open up the WAFR Assessment page

Click the Download Report button in the top left corner of the screen.

Invite Customer for WAFR Assessment

Invite clients from the Partner Dashboard to sign up for nOps

How to Invite a Customer to complete a WAFR Assessment

  1. Log in to the nOps Partner Dashboard here.
  2. From the Dashboard, click Clients.
  3. From the Settings pane, click Manage Clients.
  4. On the Manage Clients page, click New Client and select Invite a client for a well-architected assessment.
  5. Enter information about the customer in the Invitation dialog, then click the Invite client button to invite the customer.
    The customer receives an email containing a link to Sign Up.
  6. The customer must click the Sign up now button, and enter information on the form to complete the Sign Up process.
    This will automatically log a new customer into their nOps account.

To Add an AWS Account

From the Partner page, switch to the user account page by clicking here.

To add an AWS account to the customer’s nOps account, click + Add New AWS Account.

Select the nOps Wizard Setup option, then click the Next button.

Enter a name to represent the AWS account within nOps (also called an nOps project) and the name of the S3 bucket that has been created for the nOps account. Then click the Setup Account button.

This will launch the CloudFormation Console in your AWS account. If you are logged in, it will direct you to the CloudFormation Console with the stack creation wizard.

Click the check box to acknowledge that AWS CloudFormation will create IAM resources, then click Create Stack to create the CloudFormation stack that is needed to link your AWS account to the nOps project.

Stack creation will take a couple of minutes. When the CloudFormation stack is completely created, you will receive a notification email that the account is ready to be used.

The account is ready for a Well-Architected Framework Review.

FTR

Foundational Technical Review

Getting Started

For all AWS Partner-hosted solutions, passing the AWS Foundational Technical Review (FTR) requires you to complete an AWS Well-Architected Review (Review) to identify opportunities for improvement across all of the Well-Architected pillars.

You do not need to complete the remediation of identified issues to pass the FTR, only execute the Review. You can complete this assessment using either nOps or the AWS Well-Architected Tool accessible from the AWS Management Console.

PREPARATION CHECKLIST: Before you begin, you will need to gather the following:

  • Access to the master payer account if you are using organizations.
  • Permission to create and run an AWS CloudFormation stack.
  • Permission to create AWS Identity and Access Management (IAM) roles in your account.
  • Friendly account name.
  • The name of an Amazon S3 bucket where your AWS Cost and Usage Reports (CURs) will be written. (We will create one if one does not exist.)
  • CURs are enabled in the account.

To get started, click here.

Signing Up for nOps

Step 1: Once you’ve clicked on the link above (nOps Sign Up), you’ll be taken to the FTR user registration page.

Complete the signup process by entering your business email, company name, etc. and clicking “Sign Up.” Doing so will cause a verification email to be sent to you — please click the link in it to verify your email address. If you do not receive the verification email, please check your Spam folder.

Congrats! You are now registered as an nOps user.

Adding an AWS Account

Connect your AWS account(s) where the resources in your workload live.

*You will need to have access to the master payer account if you are using organizations. Additionally, you will need permissions to create and run a CloudFormation stack and create IAM roles in your account.

Click + Add AWS Account on the right.

Or, click on your username in the top right, go to Settings > AWS Accounts, and click “Add a new AWS account.”

nOps has two setup options:

  • nOps Wizard Setup (recommended) – nOps will create a CloudFormation stack using your AWS credentials.
  • Manual Setup – Used to reconfigure specific AWS accounts.

When adding a new AWS account, nOps will ask for the friendly name and the name of an S3 bucket where your CURs will be written. If you already have an S3 bucket for your CURs, you can add it here. Otherwise, nOps will attempt to create an S3 bucket.

Click “Setup Account” to be redirected to your AWS account.

*Please remember to log in to the AWS account that you want nOps to collect data from.

Agree to the CloudFormation template being able to create an IAM role and then click Create Stack.

Step 2: Once you have successfully added your AWS account to nOps, it will start the data ingestion process.

This process can take two to four hours, depending on the size of your AWS account. You should be able to see your AWS account in Settings > AWS Accounts > Active AWS Accounts.

AWS Accounts are now synced when this screen appears:

Workloads

A workload, in nOps, is a dynamic collection of AWS resources. Workloads allow you to group and manage only the resources that match a particular query. Click “Workloads” in the top nav bar to be taken to the Workloads view.

Creating Your First Workload

Step 3: If this is the first time you have created a workload, you will be able to click “Create New Workload” in the middle of the screen. After that, the Create New Workload button will move to the top right of the window.

When you click “Create New Workload,” the workload creation pane will slide into view.

  • Workload Name – This is the unique identifier for your workload.
  • AWS Account(s) – The AWS Account(s) where the resources for your workload live.
  • Workload Type – Defines the overall workload type. Please select “Well-Architected.”
  • Lens – nOps supports the AWS lens concept. Please select FTR for the lens type.
  • Environment – This defaults to Production and defines the environment from an AWS perspective.
  • Jira project – If you are using the built-in Jira integration, you will be able to select a Jira project to integrate with.
  • Description – A text description of your workload.

*At this time, creating workloads in your AWS account is not fully functional. Clicking the option can cause errors in your workload creation.

Defining the Workload Query

Step 4: After you have filled out the metadata for your workload, you can click the gray bar that says, “Specify Workload Resource,” causing the query builder to slide into view. nOps allows you to specify rules that define which resources will be added to the workload.

  • Regions – The regions that nOps will pull resources from. This defaults to All.
  • AWS Managed Services – The AWS services that nOps will include in your workload. This defaults to All.
  • VPC – The VPCs that contain the resources that nOps will include in your workload. This defaults to All.
  • Tags – Select tags to be assigned to the resources you want to include, e.g., “ApplicationA.”

Click “Save” to create your workload.

Workload Summary View

Step 5: After you have created your workload, you will see the Workloads view. Here you can see a list of all workloads you’ve created, edit the query that builds your workload, and delete your workload.

Click on the workload to be taken to the Workload Summary view. In the Workload Summary view, you will see two sections.

  • Assessment Summary – An overview of how far into the assessment you are.
  • Workload Attachments – Any files and/or links attached to the workload are added to the report generated by nOps when the assessment is completed.

Running the FTR Assessment

You might notice that the assessment is at a completion percentage greater than 0. This is normal and due to the fact that nOps uses its rules engine to discover information about the workload automatically. Click “Start Assessment” to begin the FTR Assessment.

For each question in the FTR, nOps will either automatically detect the answer to the question or allow you to answer it manually. Clicking on the box(es) in each section will designate that your workload meets or exceeds the particular requirements. You can add notes to a particular question by clicking “Add Note.” Hovering the mouse over the question will raise a context menu that gives you several options.

  • Autodiscovery Details – Information about what nOps was able to detect in your account.
  • Attach Resources – Allows you to attach specific resources to a question. These resources will be included in the report generated by nOps.
  • Create Jira Ticket – If you have integrated an instance of Jira Cloud, you will be able to open Jira issues from nOps. Use this option to assign tasks while completing your FTR.
  • Show Description – Shows a description of the question.

After you have answered the question, you can click “Submit Report,” enabling you to export the report to AWS as part of the FTR. Clicking “Exit Assessment” will return you to the summary screen where you can upload any additional documentation, see the assessment completion percentage, and export the report of the assessment.

Submitting the Package to AWS

Step 6: Once you have completed the FTR Assessment, you will need to export your completed FTR Report and send it to your Partner Solutions Architect via email. You will also need to include the following information:

  • A brief description of your solution.
  • An architecture diagram illustrating major system components and their network communication paths (examples of reference architecture diagrams here).
  • Is the solution currently generally available to customers?
  • Is the solution in AWS GovCloud? If yes, what is the reason it is in AWS GovCloud?

Next Steps

nOps enables you to complete your FTR efficiently, but it can do far more than that. nOps lets you monitor, analyze, and manage an AWS Well-Architected infrastructure that is cost-optimized, secure, reliable, efficient, and operationally excellent — and help keep it that way through continuous compliance.

Click here to learn all you can do with nOps.

AWS Foundational Technical Review (FTR) Report

nOps is used by AWS partners in the ISV Program to get their FTR accreditation and access program benefits. This is a sample report.

The nOps FTR Report can be exported from nOps to be emailed by the ISV partner to their AWS representative to complete the FTR process and access ISV program benefits:

The nOps FTR Report gives you access to:

  1. At-a-glance summary
  2. List of attached AWS resources with costs
  3. 14 detail sections for the complete FTR process.

nOps FTR Report at-a-glance summary

nOps FTR attached AWS resources list

Note: The AWS ARNs for each AWS resource have been redacted.

nOps FTR Report detail sections

These are the sections that the ISV has to complete to pass the FTR process.

1. Support Level

2. AWS Well-Architected Review

3. AWS Root Account

4. AWS Accounts

5. Communications from AWS

6. CloudTrail

7. Identity and Access Management

8. Backups and Recovery

9. Disaster Recovery

10. Amazon S3 Bucket Access

11. Cross-Account Access

12. Sensitive Data

13. Protected Health Information

14. Regulatory Compliance Validation Process

Foundational Technical Review Question Descriptions

Learn what each question in the FTR is describing

The Foundational Technical Review (FTR) Lens provides a set of specific questions for independent software vendors (ISVs) to perform a workload assessment. To understand what the question is asking, below is a definition of each question in the FTR.

List of Questions and the descriptions:

1. Support Level

Enable AWS Business Support (or greater) on all production AWS accounts.

AWS Support provides a mix of tools and technology, people, and programs designed to proactively help you optimize performance, lower costs, and innovate faster. AWS Business Support provides additional benefits, including access to AWS Trusted Advisor and AWS Personal Health Dashboard and faster response times. Subscribing to AWS Business Support or greater for all production accounts is a requirement to successfully complete the FTR. Business Support billing calculations are performed on a per-account basis. Monthly charges are based on each month’s AWS usage charges, subject to a monthly minimum. For the latest pricing information, see Premium Support Pricing.

2. AWS Well-Architected Review

Conduct a Well-Architected Review with the FTR Lens on the production workload on a periodic basis (minimum once every year).

The AWS Well-Architected Tool helps you review the state of your workloads and compares them to the latest AWS architectural best practices. The tool is based on the AWS Well-Architected Framework, developed to help cloud architects build secure, high-performing, resilient, and efficient application infrastructure. Well-Architected Reviews must be conducted on the production workload on a periodic basis. After conducting the review, you should prioritize the remediation of any identified issues according to your business priorities. It is not a requirement to complete the remediation of any issues identified in the review other than the requirements defined in this checklist.

3. AWS Root Account

Use the root user only by exception.

The root user has unlimited access to your account and its resources, and using it only by exception helps protect your AWS resources. The AWS root user must not be used for everyday tasks, even administrative ones. Instead, adhere to the best practice of using the root user only to create your first AWS Identity and Access Management (IAM) user. Then securely lock away the root user credentials and use them to perform only a few account and service management tasks. To view the tasks that require you to sign in as the root user, see AWS Tasks That Require Root User. To learn more about FTR AWS root account protection and IAM configuration requirements, watch Baseline Bits: 03 AWS Root Account Protection and AWS Identity and Access Management Configuration.

Remove access keys for the root user.

Programmatic access to AWS APIs should never use the root user. It is best not to generate a static access key for the root user. If one already exists, you should transition any processes using that key to use temporary access keys from an AWS Identity and Access Management (IAM) role, or, if necessary, static access keys from an IAM user.

Enable multi-factor authentication (MFA) on root user.

If an account is not managed by AWS Organizations, enabling MFA provides an additional control for account sign-in. Because your root user can perform sensitive operations in your account, adding an additional layer of authentication helps you to better secure your account. Multiple types of MFA are available, including virtual MFA and hardware MFA. To learn more about FTR AWS root account protection and AWS Identity and Access Management configuration requirements, watch Baseline Bits: 03 AWS Root Account Protection and AWS Identity and Access Management Configuration.

Create an incident response (IR) runbook for root account credential misuse.

A runbook that details an appropriate response to root account credential misuse enables you to promptly act in the event that your root user becomes compromised. In the event your root account credentials are inaccessible, you will need to either change your AWS account root user password, or contact account and billing support through the Unable to Sign in & Submit Billing or Account Request page.

4. AWS Accounts

Use separate accounts for production and non-production stages.

Multiple AWS accounts allow you to separate data and resources, and enable the use of Service Control Policies to implement guardrails. For example, users outside of your account do not have access to your resources by default. Similarly, the cost of AWS resources that you consume is allocated to your account. AWS recommends that you use multiple AWS Accounts to separate workloads and workload stages, such as production and non-production.

5. Communications from AWS

Configure AWS account contacts.

If an account is not managed by AWS Organizations, alternate account contacts help AWS get in contact with the appropriate personnel if needed. Configure the account’s alternate contacts to point to a group rather than an individual. For example, create separate email distribution lists for billing, operations, and security and configure these as Billing, Security, and Operations contacts in each active AWS account. This ensures that multiple people will receive AWS notifications and be able to respond, even if someone is on vacation, changes roles, or leaves the company.

Set account contact information including the root user email address to email addresses and phone numbers owned by your company.

Using company-owned email addresses and phone numbers for contact information enables you to access them even if the individuals whom they belong to are no longer with your organization.

6. CloudTrail

Configure CloudTrail in all AWS Accounts and in all Regions.

AWS CloudTrail enables governance, compliance, operational auditing, and risk auditing of your AWS account. To meet FTR requirements, you must have management events enabled for all AWS accounts and aggregate these logs into an Amazon Simple Storage Service (Amazon S3) bucket owned by a separate AWS account. The first copy of management events in each region is delivered free of charge and you only pay for S3 storage cost. Additional copies of management events will incur charges. Should you enable data events, you will incur charges for each copy along with associated S3 storage costs. For the latest pricing information see CloudTrail pricing. To learn more about FTR audit and logging requirements, watch Baseline Bits 05: Audit and Logging on AWS Accounts.
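
Aggregating management events from multiple accounts into a bucket owned by a separate account requires a bucket policy that lets CloudTrail deliver objects there. The sketch below builds the policy shape AWS documents for CloudTrail delivery; the bucket name and the source account ID (111122223333) are placeholders:

```python
import json

# Typical CloudTrail delivery policy for a central log bucket, following the
# pattern in AWS's CloudTrail documentation. Bucket name and account ID are
# placeholders, not real values.
BUCKET = "example-central-trail-bucket"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            # Each member account writes under its own AWSLogs/<account-id>/ prefix.
            "Resource": f"arn:aws:s3:::{BUCKET}/AWSLogs/111122223333/*",
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}

print(json.dumps(policy, indent=2))
```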

Store logs in a separate administrative domain with limited access (e.g. Separate AWS Account or an equivalent AWS Partner solution).

Configuring the logs to flow to a central account (i.e. a separate AWS account that is only intended for log storage and limited access) or an equivalent AWS Partner solution protects the logs from manipulation or deletion. To learn more about FTR audit and logging requirements, watch Baseline Bits 05: Audit and Logging on AWS Accounts.

Protect log storage from unintended access (e.g. MFA-delete, versioning on S3, object lock, or an equivalent solution).

Protecting log storage locations from unintended access helps with avoiding any unintended changes to log files.

Enable CloudTrail log file integrity validation.

A validated log file using integrity validation enables you to assert positively that the log file itself has not changed, or that particular user credentials performed specific API activity. The CloudTrail log file integrity validation process also lets you know if a log file has been deleted or changed, or assert positively that no log files were delivered to your account during a given period of time. CloudTrail log file integrity validation uses industry-standard algorithms: SHA-256 for hashing and SHA-256 with RSA for digital signing. This makes it computationally infeasible to modify, delete, or forge CloudTrail log files without detection. For more information, see Enabling Validation and Validating Files.
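
The hashing half of this idea can be illustrated with a short Python sketch: a digest recorded at delivery time stops matching as soon as the file's bytes change. (This only illustrates the SHA-256 step; real validation should use the `aws cloudtrail validate-logs` CLI command, which also verifies the RSA signatures.)

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest, the hash algorithm CloudTrail uses for log files."""
    return hashlib.sha256(data).hexdigest()

original = b'{"eventName": "ConsoleLogin"}'
recorded = digest(original)          # digest stored at delivery time
tampered = b'{"eventName": "DeleteTrail"}'

print(digest(original) == recorded)  # True: file unchanged
print(digest(tampered) == recorded)  # False: tampering detected
```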

7. Identity and Access Management

Enable multi-factor authentication for all Human Identities with AWS access.

You must require any human identities to authenticate using MFA before accessing your AWS accounts. Typically this means enabling MFA within your corporate identity provider. If you have existing legacy IAM users you must enable MFA for console access for those principals as well. Enabling MFA for IAM users provides an additional layer of security. With MFA, users have a device that generates a unique authentication code (a one-time password, or OTP). Users must provide both their normal credentials (user name and password) and the OTP. The MFA device can either be a special piece of hardware, or it can be a virtual device (for example, it can run in an app on a smartphone). To learn more about FTR IAM configuration requirements, watch Baseline Bits: 03 AWS Root Account Protection and AWS Identity and Access Management Configuration. Please note that Machine Identities do not require MFA.

Rotate credentials regularly.

When you cannot rely on temporary credentials and require long term credentials, rotate the IAM access keys regularly. If an access key is compromised without your knowledge, you limit how long the credentials can be used to access your resources. For information about rotating access keys for IAM users, see Rotating Access Keys. To learn more about FTR IAM configuration requirements, watch Baseline Bits: 03 AWS Root Account Protection and AWS Identity and Access Management Configuration.
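
As an illustration, a simple age check like the following can flag keys that are due for rotation. The 90-day window is an example threshold, not an FTR-mandated value; a real check would list keys via the IAM API first.

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)  # example rotation window, tune to your policy

def needs_rotation(created, now, max_age=MAX_KEY_AGE):
    # Flag access keys older than the allowed rotation window.
    return now - created > max_age
```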

Use strong password policy.

Enforce a strong password policy, and educate users to avoid common or re-used passwords. For IAM users, you can create a password policy for your account on the Account Settings page of the IAM console. You can use the password policy to define password requirements, such as minimum length, whether it requires non-alphabetic characters, and so on. For more information, see Setting an Account Password Policy for IAM Users. To learn more about FTR IAM configuration requirements, watch Baseline Bits: 03 AWS Root Account Protection and AWS Identity and Access Management Configuration.
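
A password policy of the kind described can be sketched as a small validator. The specific requirements below (14-character minimum, mixed character classes) are illustrative defaults, not mandated values:

```python
import re

def meets_policy(password, min_length=14):
    # Example policy: minimum length plus upper-case, lower-case, digit,
    # and non-alphanumeric characters. Tune to your own requirements.
    checks = [
        len(password) >= min_length,
        re.search(r"[A-Z]", password),
        re.search(r"[a-z]", password),
        re.search(r"[0-9]", password),
        re.search(r"[^A-Za-z0-9]", password),
    ]
    return all(checks)
```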

Create individual identities (no shared credentials) for anyone who needs AWS access.

By creating individual identities for people accessing your account, you can give each user a unique set of security credentials and permissions. Individual users provide the ability to audit each user's activity. To learn more about FTR IAM configuration requirements, watch Baseline Bits: 03 AWS Root Account Protection and AWS Identity and Access Management Configuration.

Use IAM roles and their temporary security credentials to provide access to third parties.

Do not provision IAM users and share those credentials with people outside of your organization. Any external services that need to make AWS API calls against your account (e.g. a monitoring solution that accesses your account’s AWS CloudWatch metrics) must use a cross-account role. See documentation on providing access to AWS accounts owned by third parties for more information.

Grant least privilege access.

You must follow the standard security advice of granting the least privilege. Grant only the access that identities require by allowing access to specific actions on specific AWS resources under specific conditions. Rely on groups and identity attributes to dynamically set permissions at scale, rather than defining permissions for individual users. For example, you can allow a group of developers access to manage only resources for their project. This way, when a developer is removed from the group, access for the developer is revoked everywhere that group was used for access control, without requiring any changes to the access policies. To learn more about FTR IAM configuration requirements, watch Baseline Bits: 03 AWS Root Account Protection and AWS Identity and Access Management Configuration.
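
For illustration, a least-privilege policy document might scope a developer group to a single project's bucket. The bucket name and tag value below are hypothetical placeholders, not nOps or FTR values:

```python
import json

# Hypothetical policy: allow object reads/writes only on one project's
# bucket, and only for principals tagged with that project.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-project-bucket/*",
            "Condition": {
                "StringEquals": {"aws:PrincipalTag/project": "example-project"}
            },
        }
    ],
}

policy_document = json.dumps(least_privilege_policy)
```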

Manage access based on life cycle.

Integrate access controls with the operator and application lifecycle and with your centralized federation provider and IAM. For example, remove a user's access when they leave the organization or change roles.

Audit identities quarterly.

Auditing the identities that are configured in your identity provider and IAM helps ensure that only authorized identities have access to your workload. For example, remove people that leave the organization, and remove cross-account roles that are no longer required. Have a process in place to periodically audit the permissions granted to each IAM entity. This helps you identify the policies you need to modify to remove any unused permissions; see IAM access advisor.

Do not embed credentials in application code.

Ensure all credentials used by your applications (e.g. IAM access keys, database passwords, etc.) are never included in your application’s source code or committed to source control in any way.

Store secrets in specialized service.

Where you cannot use temporary credentials, such as tokens from AWS Security Token Service, storing your secrets, such as database passwords, using a service like AWS Secrets Manager or an equivalent AWS Partner solution, helps secure your credentials.

Encrypt all end user/customer credentials and hash passwords at rest.

If storing end user/customer credentials in a database that you manage, encrypt credentials at rest and hash passwords. As an alternative, AWS recommends using a user identity synchronization service, such as Amazon Cognito or an equivalent AWS Partner solution.
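
If you do store credentials yourself, a salted key-derivation function is the standard approach for hashing passwords at rest. Below is a minimal sketch using Python's standard library; the iteration count is an example figure, so follow current guidance for your stack:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # PBKDF2-HMAC-SHA256 with a random per-user salt; 600,000 iterations
    # is an illustrative value, not a mandated one.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(candidate, digest)
```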

8. Backups and Recovery

Perform data backup automatically.

You must perform regular backups to a durable storage service. Backups ensure that you have the ability to recover from administrative, logical, or physical error scenarios. Configure backups to be taken automatically based on a periodic schedule, or by changes in the dataset. RDS instances, EBS volumes, DynamoDB tables, and S3 objects can all be configured for automatic backup. AWS Marketplace solutions or third-party solutions can also be used. To learn more about FTR backup and recovery requirements, watch Baseline Bits 07: Backups and Disaster Recovery.

Perform periodic recovery of the data to verify backup integrity and processes.

Validate that your backup process implementation meets your recovery time objectives (RTO) and recovery point objectives (RPO) by performing a recovery test both on a periodic basis and after making significant changes to your cloud environment. AWS provides resources to help you manage backup and restore of your data. To learn more about FTR backup and recovery requirements, watch Baseline Bits 07: Backups and Disaster Recovery.

9. Disaster Recovery

Define a Recovery Point Objective (RPO) according to your organizational needs.

Your data loss tolerance is the basis of your backup strategy and frequency. A Recovery Point Objective (RPO) defines that tolerance in terms of time; define it according to your organizational needs.

Establish a Recovery Time Objective (RTO) to meet business needs and expectations. This should be on the order of minutes for all components that are critical for providing service to your customers but should never exceed 24 hours.

Recovery Time Objective (RTO) defines your tolerance for downtime. The FTR requirement is for the RTO to be less than 24 hours.
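
The 24-hour ceiling can be expressed as a trivial compliance check:

```python
from datetime import timedelta

FTR_MAX_RTO = timedelta(hours=24)  # FTR requirement: RTO under 24 hours

def rto_compliant(rto):
    # True when the stated recovery time objective meets the FTR ceiling.
    return rto < FTR_MAX_RTO
```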

Test disaster recovery implementation to validate the implementation.

Test failover to DR to ensure that RTO and RPO are met, both periodically and after major updates. The DR test must include accidental data loss, instance, and Availability Zone (AZ) failures. At least one DR test that passes RTO and RPO requirements must be completed prior to FTR approval.

10. Amazon S3 Bucket Access

Review all Amazon S3 buckets to determine appropriate access levels.

You must ensure that buckets that require public access have been reviewed to determine if public read or write access is needed, and appropriate controls are in place to control public access. When using AWS, it’s best practice to restrict access to your resources to the people that absolutely need it (the principle of least privilege). To learn more about FTR S3 bucket access management requirements, watch Baseline Bits 06: S3 Bucket Access Management and Configuration.


Restrict access to S3 buckets that should not have public access.

You must ensure that buckets that should not allow public access are properly configured to prevent public access. By default, all S3 buckets are private, and can only be accessed by users that have been explicitly granted access. Most use cases won’t require broad-ranging public access to read files from your S3 buckets, unless you’re using S3 to host public assets (for example, to host images for use on a public website), and it’s best practice to never open access to the public. To learn more about FTR S3 bucket access management requirements, watch Baseline Bits 06: S3 Bucket Access Management and Configuration.

Implement monitoring and alerting to identify when S3 buckets become public.

You must have monitoring or alerting in place to identify when S3 buckets become public. Trusted Advisor checks for S3 buckets that have open access permissions. Bucket permissions that grant List access to everyone can result in higher than expected charges if objects in the bucket are listed by unintended users at a high frequency. Bucket permissions that grant Upload/Delete access to everyone create potential security vulnerabilities by allowing anyone to add, modify, or remove items in a bucket. The Trusted Advisor check examines explicit bucket permissions and associated bucket policies that might override the bucket permissions. You can also use AWS Config to monitor your S3 buckets for public access. For more information on using AWS Config to monitor S3, please take a look at this blog. To learn more about FTR S3 bucket access management requirements, watch Baseline Bits 06: S3 Bucket Access Management and Configuration.
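
As a sketch of what such monitoring evaluates, the helper below checks a bucket's S3 Block Public Access settings; the dictionary shape mirrors the four standard flags, and a real monitor would fetch them via the S3 GetPublicAccessBlock API or AWS Config rather than receive them as a plain dict:

```python
def bucket_allows_public_access(public_access_block):
    # A bucket is fully locked down only when all four S3 Block Public
    # Access settings are enabled; any missing or False flag leaves a
    # potential path to public exposure.
    required = (
        "BlockPublicAcls",
        "IgnorePublicAcls",
        "BlockPublicPolicy",
        "RestrictPublicBuckets",
    )
    return not all(public_access_block.get(flag, False) for flag in required)
```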

11. Cross-Account Access

Use cross-account roles to access customer accounts.

Cross-account roles reduce the amount of sensitive information AWS Partners need to store for their customers. To learn more about FTR cross-account access requirements, watch Baseline Bits 04: Using AWS Identity and Access Management Roles for Cross-Account Access.

Provide guidance or an automated setup mechanism (e.g. AWS CloudFormation template) for creating cross-account role with minimum required privileges.

The policy created for cross-account access in customer accounts must follow the least privilege principle. The partner must provide a role policy document or an automated setup mechanism (e.g. an AWS CloudFormation template) for the customers to use to ensure that the roles are created with minimum required privileges.

Use external ID with cross-account roles to access customer accounts.

The external ID allows the user that is assuming the role to assert the circumstances in which they are operating. It also provides a way for the account owner to permit the role to be assumed only under specific circumstances. The primary function of the external ID is to address and prevent the confused deputy problem.

Use a value you generate (not something provided by the customer) for the external ID.

When configuring cross-account access using IAM roles, you must use a value you generate for the external ID, instead of one provided by the customer, to ensure the integrity of the cross account role configuration. A partner-generated external ID ensures that malicious parties cannot impersonate a customer’s configuration and enforces uniqueness and format consistency across all customers. If you are not generating an external ID today we recommend implementing a process that ensures a random unique value, such as a Universally Unique Identifier, is generated for the external ID that a customer would use to setup a cross account role.
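
A minimal sketch of generating a UUID-based external ID and the trust-policy condition that enforces it; the account ID passed in is a placeholder:

```python
import uuid

def generate_external_id():
    # One random UUID per customer: unique, consistent in format, and
    # generated by the partner rather than supplied by the customer.
    return str(uuid.uuid4())

def trust_policy(partner_account_id, external_id):
    # Trust policy the customer attaches to the cross-account role; the
    # sts:ExternalId condition must match before the role can be assumed.
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{partner_account_id}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": external_id}},
        }],
    }
```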

Ensure all external IDs are unique.

The external IDs used must be unique across all customers. Re-using external IDs for different customers does not solve the confused deputy problem and runs the risk of customer A being able to view data of customer B by using the role ARN of customer B along with the external ID of customer B. To resolve this, we recommend implementing a process that ensures a random unique value, such as a Universally Unique Identifier, is generated for the external ID that a customer would use to setup a cross account role.

Provide read-only access to external ID to customers.

Customers must not be able to set or influence external IDs. When the external ID is editable, it is possible for one customer to impersonate the configuration of another. When the external ID is editable Customer A can create a cross account role setup using customer B’s role ARN and external id, granting customer A access to customer B’s data. Remediation of this item involves making the external ID a view only field ensuring that the external ID cannot be changed for purposes of impersonating the setup of another customer.

Deprecate any historical use of customer-provided IAM credentials.

If your application provides legacy support for the use of static IAM credentials for cross-account access, the application’s user interface and customer documentation must make it clear that this method is deprecated and the use of a cross-account IAM role is recommended. Existing customers should be encouraged to switch to cross-account role based-access, and collection of credentials should be disabled for new customers. To learn more about FTR cross-account access requirements, watch Baseline Bits 04: Using AWS Identity and Access Management Roles for Cross-Account Access

12. Sensitive Data

Identify sensitive data (e.g. Personally Identifiable Information (PII) and Protected Health Information (PHI)).

Data classification enables you to determine which data needs to be protected and how. Based on the workload and the data it processes, identify the data that is not common public knowledge.

Encrypt all sensitive data at rest.

Encryption maintains the confidentiality of sensitive data even when it gets stolen or the network through which it is transmitted becomes compromised.

Only use protocols with encryption when transmitting sensitive data.

Encryption maintains data confidentiality even when the network through which it is transmitted becomes compromised.

Log access to sensitive data comprehensively throughout the system

Visibility into any unexpected access to sensitive data provides you with the opportunity to perform necessary corrective actions to further protect your data. Scope your systems for components that store sensitive data. Implement application- and resource-level auditing and logging to monitor all access to data and quickly identify unauthorized access.

13. Protected Health Information

Have a Business Associate Addendum (BAA) in place with AWS for every AWS account with Protected Health Information (PHI).

If the solution handles PHI, the partner must have a BAA in place with AWS for every AWS account that stores, processes, or transmits PHI.

Only use services in the HIPAA Eligible Services Reference for solutions that handle PHI.

Solutions that handle PHI must only use services listed in the HIPAA Eligible Services Reference.

14. Regulatory Compliance Validation Process

Establish a process to ensure that all required compliance standards are met.

If you advertise that your product meets specific compliance standards, you must have an internal process for ensuring compliance. Examples of compliance standards include PCI DSS, FedRAMP, and HIPAA. Applicable compliance standards are determined by various factors, such as what types of data the solution stores or transmits and which geographic regions the solution supports.

AWS MAP#

Automated Tagging for the AWS Migration Acceleration Program#

If you’re participating in the AWS Migration Acceleration Program (MAP), you probably know that getting AWS credits for resources you’ve migrated into AWS requires applying the right tags to those resources – and you want to apply those tags as early as possible in order to start earning credits as early as possible. nOps MAP features help automate this process, ensuring that the right tags are applied to your resources as early as possible so you get the maximum credit from the MAP program.

These MAP features can help whether you’re a new AWS customer just starting to migrate your workloads into the AWS cloud, or an existing AWS customer moving workloads from on-prem data centers or from other cloud providers.

And note that qualifying for the credit also requires an AWS Well Architected Framework Review, so that you get the most from your cloud deployment – and that review is also facilitated by nOps tools.

nOps Automated Tagging for MAP-Program Credits

In order to get your AWS MAP credits, all your resources must be tagged according to the MAP rules. If your resources are not tagged properly, you won’t get the credits – and credits only accrue on spend that occurs after the tags are applied. nOps helps you list the resources migrating for your various workloads, identify those that have yet to be tagged, apply the right tags, and then track your AWS incentive credits over the course of your migration. In doing so, you’ll also be setting up all your workloads for the required Well-Architected Review, which nOps can help you conduct.

To use the nOps MAP facility, from the nOps dashboard go to Workload > AWS MAP:

Defining Your Migration Projects

The nOps MAP facility starts with the page AWS MAP 2.0 Summary, where you’ll find three sections:

  • QTD (Overall Migration Resource Spend Summary) – gives quarter to date total spending in AWS on migrated resources (tagged and untagged), untagged resources, and earned MAP credits.
  • Your Current Tagging Status – shows what percentage of your migrated resources are tagged with map-migrated (required to earn credits) and what percentage are untagged, along with the equivalent resource spend for each.
  • List of Migration Projects – shows projects that you’ve defined. Each project corresponds with a MAP Migration Contract from AWS and is identified by the project number in that contract, of the form MPExxxxx (where xxxxx are digits).

To create/add a MAP migration project in nOps, click + Create MAP Project at the upper right; it will bring up this dialog:

In the dialog, fill out the details:

  • Project Name — Choose one that is meaningful to you.
  • Project ID — The project ID starts with “MPE” followed by five digits (MPExxxxx). If you are not sure where to find your MAP project ID, refer to the top of the first page of your MAP contract.
  • Start Date and End Date — Use the dates from your MAP contract with AWS.
  • Tag Key — Prefilled for you.
  • Server ID — Must be the exact server-ID string from AWS. If you do not already have an AWS server ID, click the blue Generate Server ID button, which will take you to the AWS facility for creating one. Be sure to copy the ID that you create there, then come back to this dialog and paste it into the Server ID field.
  • Workload (optional) — As identified in nOps’ Well-Architected Framework Review facility. You can select an existing workload that you’ve created in nOps, or create a new workload for the MAP project by clicking Create Workload, which takes you to the Workloads page in nOps. With workloads, you can define a scope for where nOps should look for tagged and untagged resources.

Once you’ve entered all the details for this MAP project, click Create. The project you just created will show up in the List of Migration Projects on the AWS MAP 2.0 Summary page:

Every MAP project that you create this way in nOps will correspond to a MAP Migration Contract.
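
As a quick sanity check, the MPExxxxx project-ID format described above can be validated with a short helper (a hypothetical sketch, not part of nOps):

```python
import re

# Matches "MPE" followed by exactly five digits, e.g. MPE12345.
MAP_PROJECT_ID = re.compile(r"^MPE\d{5}$")

def valid_project_id(project_id):
    return bool(MAP_PROJECT_ID.match(project_id))
```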


Note: Creating a migration project will not tag the resources of that project. To tag resources, continue to the next section, “Tagging Resources”.


Tagging Resources Within Each Project to Earn MAP Credits

nOps provides an easy automated way for you to tag any untagged resources.

Once your MAP migration projects are defined, each with its associated resources via the server ID, nOps will show you all the AWS resources associated with the servers you have identified for the migration projects. As more and more resources from the servers/storage units are added, you will periodically come back to nOps to tag all untagged resources.

To tag resources of a project:

  1. Click ➡️ in the Action column of the List of Migration Projects section:
    This will take you to the Migration Details of the migration project.
  2. Scroll down to the List of resources section. This list contains all the resources associated with the migration project:
  3. Select the resource(s) you want to tag, and click + Add Migration Tag and then + Tag Now:
  4. Click Yes:

Note:

Once you click Yes, nOps will redirect you to AWS, where you will tag the selected resources. nOps will pre-fill Tag Key, Tag Value, and ARN, which is some of the information required to tag the selected resource.

To successfully tag a resource in AWS, you (the customer) will need two permissions from your AWS admin: permission to access the selected resources and permission to tag resources.


Managing Your Migration Projects

Once you’ve defined one or more migration projects in nOps, you will see each of those projects listed at the bottom of the AWS MAP 2.0 Summary page where you started:

You can access and manage any migration project by clicking the ➡️ button in the Action column of the List of Migration Projects section.

In the AWS MAP 2.0 Summary page you will also find a collective summary of all of your migration projects in the form of:

  • Overall Migration Resource Spend Summary — Shows quarter to date (QTD) migrated resource spend, untagged resource spend, and estimated credit.
  • Your Current Tagging Status — Shows a bar chart for how much of your spending has been tagged and how much remains untagged in the current QTD, across all of your migration projects.

Note: Once a quarter ends and the next one starts, you can still tag the untagged resources from the previous quarter but the previous quarter’s cost will not change.


Navigate and Track Your Incentive Credits

In the AWS MAP 2.0 Summary page, again note the arrow at the right of each migration project line, in the Action column. Clicking that arrow brings up the details of that migration project:

The details page shows the performance of a specific migration project, and it is divided into four distinct sections. At the top of the details page you will see the Migration ID, Start Date, End Date, Tag Key, Server ID, and Workload associated with the migration project you just clicked:

The next three sections are all about navigating your incentive credits, tracking your tagging status, and tagging untagged resources:

  • MAP Tracker
  • Your Current Tagging Status
  • List of Resources

MAP Tracker

MAP Tracker offers a way for you to find the scope of your project and navigate your incentive credits over the entire period of your migration, with the help of:

  • MAP Spend: Total Spend/Cost excluding CUR line-item types Tax, Credit, and Refunds.
  • Trailing Twelve Month (TTM) MAP Spend: Sum of MAP spend in the existing quarter and the previous three quarters.
  • General Spend Growth: Refer to the MAP Credit Calculations section of your AWS MAP plan.
  • Multiply by 25%: The General Spend Growth delta multiplied by 25%.
  • Earned MAP Incentives: Credits that you have earned from AWS with the MAP plan.
  • MAP Credits Disbursed: Credits that you have received from AWS. You only receive your earned credits in the month after a quarter ends.
  • Cumulative MAP Credits Disbursed: Cumulative sum of MAP credits disbursed across all past quarters and the current quarter.
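
The arithmetic behind the first two tracker figures can be sketched as follows. This is an illustration only; your actual credit terms are defined in the MAP Credit Calculations section of your MAP contract:

```python
def ttm_map_spend(quarterly_spend):
    # Trailing twelve months: the current quarter plus the previous three.
    return sum(quarterly_spend[-4:])

def estimated_credit(general_spend_growth):
    # Illustration of the "Multiply by 25%" line; real terms come from
    # your MAP contract, not this formula.
    return general_spend_growth * 0.25
```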

Current Tagging Status

This section is similar to the Current Tagging Status of the AWS MAP 2.0 Summary page, which summarizes the tagging status of all your migration projects.

The Current Tagging Status on this details page shows the tagging status for only this specific migration project. It shows a bar chart for how much of your spending has been credited against tagged resources, and how much remains against untagged resources, in the current QTD, for this specific migration project.

List of Resources

This section shows all the resources, tagged or untagged, within the migration project. You can filter the resources based on the account associated with the resource and the tagging status (tagged, untagged) of the resource. You also have the option to search any specific service with the help of the search box at the top right of the list.

To tag untagged resources, select the untagged resources and click the + Add Migration Tag button. To learn more about the process in detail, see the Tagging Resources Within Each Project to Earn MAP Credits section of this article.

The resource table gives:

  • Service Name — AWS service that contains the resources.
  • Resource Name — Resource associated with the AWS service.
  • Account — The AWS account associated with the resource.
  • Region — Region of the resource.
  • QTD Cost — Quarter to date cost of the resource.
  • Action — Click the Action to see the Details, Cost History, and Config History of the resource.

The resource table will update periodically as more resources are added. Continue visiting the List of Resources section to tag all newly added untagged resources for the duration of the migration project.


Note:

Once you tag an untagged resource, it will take approximately one hour for the change to reflect in nOps.

To tag multiple resources in one go, select them and then click the + Add Migration Tag button. For resources that belong to different accounts, nOps will present them grouped by account, and you can tag resources for one account at a time.


Troubleshooting

Matching Spend and Credit Numbers with AWS

What to do if the numbers don’t match what you see in AWS:

  • Check the dates when your tags were applied against the dates AWS uses.
  • You can enter past dates for tagging if resources were not tagged in AWS until later.
  • Note that this differs from Tag Explorer, which assumes a resource has carried the tag for its entire life.

AWS Logins for Generating Server ID and for Tagging Resources

What to do if you don’t have the permissions to tag your migration project resources:

  • Ask your AWS admin to grant you permissions.
  • Check if you have the permissions for the resource that you are trying to tag.

Data Explorer#

nOps Data Explorer#

Query your cloud resources directly from the nOps platform.

nOps provides an easy and automated method for you to query your cloud resources directly from the nOps platform with the help of nOps Data Explorer powered by GraphQL.

The nOps Data Explorer is divided into:

  • Query Explorer
  • Query Builder
  • Query Output
  • Data Explorer Actions

Query Explorer

In the nOps Data Explorer panel, nOps constantly refreshes the cloud resources so that you have access to the current state of your environment.

As you expand the resources, you’ll see that you can filter using a simple clickable interface that builds the query interactively: retrieve the exact configuration elements for your resources, and use simple SQL-style logic to filter the results.

We’ve also mapped the related resources in the nOps relationship graph. As you expand, you’ll see additional data sources for related resources. This allows you to ask questions like “Find all of the EC2 instances in a specific VPC with Auto Scaling enabled”.

Add New Query/Subscription

With the Query selection, you can add multiple queries in the same request.

Like queries, subscriptions enable you to fetch data. Unlike queries, subscriptions are long-lasting operations that can change their result over time. They can maintain an active connection to your GraphQL server (most commonly via WebSocket), enabling the server to push updates to the subscription’s result.

Subscriptions are useful for notifying your client in real time about changes to back-end data, such as the creation of a new object or updates to an important field.

Query Builder

As you interact with the Data Explorer, you’ll see your query dynamically generated in the Query Builder tab. Once you become a GraphQL ninja, you can also edit directly in this interface.

If you write a query directly into the Query Builder, simply click the Prettify button to automatically format the query and fix its indentation.

To run a query, click the Play button. The output of your query will be shown in the Query Output tab.

Query Variables

Variables simplify GraphQL queries and mutations by letting you pass data separately. A GraphQL request can be split into two sections: one for the query or mutation, and another for variables.
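
For example, a request body split this way might look like the following. The field names and operator syntax are modeled on the iam_users sample query later in this article; the timestamptz variable type is an assumption about the underlying schema:

```python
import json

# GraphQL request split into its query and variables sections.
request_body = {
    "query": (
        "query UsersSince($cutoff: timestamptz) {\n"
        "  iam_users(where: {create_date: {_gt: $cutoff}}) {\n"
        "    user_name\n"
        "    create_date\n"
        "  }\n"
        "}"
    ),
    "variables": {"cutoff": "2022-01-01"},
}

payload = json.dumps(request_body)
```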

Request Headers

Use Request Headers to send custom headers with the query.

Query Output

The Query Output tab will present your query results. They’ll be presented in an easy-to-consume JSON format. As you add more filters to your query and select the exact resource attributes that you’d like to retrieve, your result set will give you access to the precise data that you’re searching for.

To see the query output as a formatted table, click the Table View button.

If there are errors in the syntax of the query, the Query Output tab will show an error message.

Data Explorer Actions

Data Explorer actions support the entire exploring process. With Data Explorer actions you can:

  • Execute Query: After building your query, click the Play button to see the results.
  • Prettify Query: Formats the queries that you manually created to a visually presentable form.
  • Toggle Explorer: Hide or show the Query Explorer to get more screen space for writing queries and reading the query output.
  • Save Query: Save your queries as Public or Private.
  • My Queries: Access saved queries.
  • History: See the history of all the queries you’ve made.
  • Table View: See the query output in a formatted table format.
  • Documentation Explorer: Access the GraphQL Schema documentation right here in the Data Explorer.

Save Query

With the Save Query button, you can perform actions on the queries you’ve created, saving them as either Public or Private.

My Queries

My Queries will open a panel where you can access all the queries that you have saved and those shared by your colleagues. In addition, you’ll be able to favorite, archive, and delete to keep them organized.

Code Exporter

Once you’ve created a query that returns a meaningful set of resources (like looking for unencrypted S3 buckets), you’ll likely want to integrate it with your CI/CD or monitoring system so that you can continuously monitor for matches. The Code Exporter will generate Python and JavaScript code that can be easily integrated into your engineering systems.

Documentation Explorer

With the Documentation Explorer, you can access the GraphQL Schema documentation right here in the Data Explorer while creating your queries.

Sample Data Explorer Queries

Following are a few sample queries that you can use to get data for your own cloud resources using nOps Data Explorer:

IAM Users Created After "X" Date

Lists all IAM users created after a specific date (the query uses 1/1/22 as the example).

query MyQuery {
  iam_users(where: {create_date: {_gt: "2022-01-01"}}) {
    arn
    create_date
    user_id
    user_name
  }
}

All EC2s That Are Windows Machines

Shows all VMs running Windows as the platform.

query MyQuery {
  ec2_instances(where: {platform: {_eq: "windows"}}) {
    instance_id
    instance_type
    platform
    tags
    state
  }
}

All Encrypted EC2 Snapshots and Their Associated EC2

Shows all encrypted snapshots and the EC2 machine each snapshot is associated with.

query MyQuery {
  ec2_snapshots(where: {encrypted: {_eq: true}}) {
    description
    encrypted
    state
    snapshot_id
    ec2_volume {
      ec2_instances {
        instance_id
        state
      }
    }
  }
}

All CloudFormation Stacks That Failed to Delete

Shows all CloudFormation stacks that failed to delete after being queued for deletion.

query MyQuery {
  cloudformation_stacks(where: {stack_status: {_eq: "DELETE_FAILED"}}) {
    role_arn
    stack_id
    stack_name
    stack_status
    stack_status_reason
  }
}

All IAM Roles Created Between Dates

Shows all IAM roles created between two dates; edit the date range to change the results.

query MyQuery {
  iam_roles(where: {create_date: {_lt: "2020-01-01", _gt: "2019-01-01"}}) {
    arn
    role_name
    create_date
    description
  }
}
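
To run one of these sample queries from a script or CI job, the request is an ordinary JSON POST to a GraphQL endpoint. The endpoint URL and authentication header below are placeholders; the Code Exporter generates the exact code for your account:

```python
import json

# Placeholder values: the real endpoint and auth header come from the
# code the Code Exporter generates for you.
ENDPOINT = "https://example.invalid/graphql"

def build_request(query, api_key):
    # Assemble the pieces of an HTTP POST carrying a GraphQL query.
    headers = {"Content-Type": "application/json", "x-api-key": api_key}
    body = json.dumps({"query": query}).encode()
    return ENDPOINT, headers, body
```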

ShareSave#

Getting Started with ShareSave#

ShareSave provides risk-free Auto-pilot EC2, RDS, ElasticCache, OpenSearch, and Redshift Reserved Instances Management.

ShareSave is ideal for organizations that are growing rapidly and having difficulty keeping track of everything, from server-side infrastructure and CMS stacks to API stacks, analytics, and linear playout engines.

ShareSave is also ideal for organizations that over-buy reserved instances at an alarming rate during rapid expansion to keep up with demand, and then find it difficult to wind down after everything settles. ShareSave not only helps during expansion but also saves a substantial percentage of monthly AWS costs through ShareSave RI management, the ShareSave Scheduler, and the ShareSave Graviton program.

ShareSave Automations

The following is the list of cost-saving automations under ShareSave:

  • Risk-Free Commitment
  • Graviton
  • Resource Scheduler
  • Auto Scaling Groups
  • Amazon Relational Database Service

Risk-Free Commitment

Real-time, risk-free, hands-free automatic life-cycle management of Amazon EC2 and RDS commitments.

The ShareSave AI engine collects Amazon CloudWatch and AWS CloudTrail logs and continuously monitors and analyzes infrastructure usage data points. It then automatically reacts in real time by purchasing RIs upon an increase in compute usage and selling RIs upon a decrease in compute usage. nOps continuously purchases and sells commitments on an hourly basis, depending on your infrastructure’s capacity changes.

ShareSave grabs the most lucrative discounts in the Amazon EC2 Reserved Instance Marketplace.
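The buy/sell reaction described above can be illustrated with a toy decision function. This is only a sketch of the idea; the actual ShareSave engine, its coverage targets, and its CloudWatch/CloudTrail inputs are not public, so the 90% coverage target below is an assumed example:

```python
"""Toy sketch of the hourly rebalancing decision described above."""

def rebalance(usage_instances: int, held_ris: int, target_coverage: float = 0.9) -> tuple[str, int]:
    """Decide whether to buy or sell RIs to track a coverage target.

    usage_instances: instances currently running (from usage data)
    held_ris: reserved instances currently owned
    """
    target = round(usage_instances * target_coverage)
    if held_ris < target:
        return ("buy", target - held_ris)   # usage grew: purchase RIs
    if held_ris > target:
        return ("sell", held_ris - target)  # usage shrank: sell on the RI Marketplace
    return ("hold", 0)
```

Run hourly against fresh usage data, a loop like this tracks capacity changes in both directions, which is the property that makes the commitment "risk-free".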

Graviton

Switching workloads to AWS Graviton-based instances, which provide the best price performance in Amazon EC2.

Resource Scheduler

nOps ShareSave Resource Scheduler makes it easy to pause resources during periods of inactivity. It leverages the Amazon EventBridge bus to deliver signals that stop resources when they are idle and automatically restart them when they are most likely to be used.
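For illustration, a stop/start schedule like the one EventBridge executes boils down to a pair of cron expressions. The helper below is a sketch (the hour arguments and the MON-FRI default are assumptions); the expressions themselves follow the documented EventBridge cron format:

```python
"""Sketch: build the two EventBridge schedule expressions a stop/start
schedule needs. The idle window passed in is illustrative."""

def schedule_expressions(stop_hour_utc: int, start_hour_utc: int, days: str = "MON-FRI") -> dict:
    """Return EventBridge cron expressions that stop a resource at
    stop_hour_utc and start it again at start_hour_utc (UTC).

    EventBridge cron fields: minutes hours day-of-month month day-of-week year;
    day-of-month must be '?' when day-of-week is specified.
    """
    return {
        "stop": f"cron(0 {stop_hour_utc} ? * {days} *)",
        "start": f"cron(0 {start_hour_utc} ? * {days} *)",
    }
```

With boto3, each expression would be passed to `events.put_rule(Name=..., ScheduleExpression=...)` with a target that stops or starts the instance.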

Auto Scaling Groups

A data-driven approach to recommending new autoscaling configurations and real-time optimization opportunities based on your actual usage:

  • Determine your group’s idle resources using statistical analysis based on the group’s utilization data.
  • Reconfigure your autoscaling group based on actual usage.
  • The most optimal personalized configuration based on your historical utilization data.
  • Keep only the amount of resources you need.

Low and no code, one-click automation built into every recommendation you see.
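As an illustration of the statistical idea (not the actual nOps algorithm), a configuration recommendation can be derived from historical utilization; the 70% target utilization below is an assumed example:

```python
"""Sketch: size an Auto Scaling group from historical utilization
instead of a guess. Thresholds and formulas are illustrative."""
import math

def recommend_capacity(hourly_cpu_pct: list[float], current_capacity: int,
                       target_utilization: float = 70.0) -> dict:
    """Recommend min/desired/max from historical CPU utilization.

    Scales current capacity so that peak observed load would run at
    roughly target_utilization, and sizes the minimum from median load.
    """
    peak = max(hourly_cpu_pct)
    median = sorted(hourly_cpu_pct)[len(hourly_cpu_pct) // 2]
    desired = max(1, math.ceil(current_capacity * peak / target_utilization))
    minimum = max(1, math.ceil(current_capacity * median / target_utilization))
    return {"min": minimum, "desired": desired, "max": max(desired, minimum)}
```

For a group of 10 instances that rarely exceeds 35% CPU, a sketch like this would recommend running far fewer instances, which is the "keep only the amount of resources you need" point above.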

Amazon Relational Database Service

The RDS recommendations clearly describe the optimization approach you should take and show the recommendations to implement.

nOps looks at utilization metrics and determines time periods when RDS instances are running but inactive. The insights that nOps gathers from your utilization patterns turn into scheduling recommendations that you can implement to immediately start saving and reduce your spend.
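A minimal sketch of idle-period detection, assuming hourly CPU samples and a 5% idle threshold (both are illustrative choices, not the nOps implementation):

```python
"""Sketch: find the idle window in a day of hourly CPU samples."""

def idle_hours(hourly_cpu_pct: list[float], threshold: float = 5.0) -> list[int]:
    """Return hours of day (0-23) where average CPU stays below threshold."""
    return [h for h, cpu in enumerate(hourly_cpu_pct) if cpu < threshold]

def longest_idle_window(hours: list[int]):
    """Collapse idle hours into the longest contiguous (start, end) window,
    or None if there are no idle hours."""
    if not hours:
        return None
    best = cur = (hours[0], hours[0])
    for h in hours[1:]:
        cur = (cur[0], h) if h == cur[1] + 1 else (h, h)
        if cur[1] - cur[0] > best[1] - best[0]:
            best = cur
    return best
```

The resulting window is the kind of insight that turns into a stop/start scheduling recommendation.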

ShareSave Dashboard

The ShareSave Dashboard provides clear visibility of the potential savings that can be achieved using the nOps platform. The ShareSave program is able to process 6 months’ worth of data.

nOps continually refines its savings recommendations and strives to take ShareSave to absolute perfection.

The ShareSave Dashboard consists of three sections:

  1. Savings Summary and Breakdown
  2. List of Opportunities
  3. Filter

Savings Summary and Breakdown

In the Savings Summary, you will find:

  • Lost Opportunities — Money you could have saved in the last 60 days if you had signed up for nOps ShareSave.
  • Estimated Total Savings — Estimated savings from nOps ShareSave for the specified time period (MTD, 7 Days, 30 Days, 60 Days, 90 Days).
  • Estimated Total ShareSave Fee — What we charge for saving you money.
  • Estimated NET Savings — What you save after paying nOps.
  • Net Savings % — The percentage of costs that you saved versus on-demand.
  • Estimated Annualized Net Savings — What you will save in a year with ShareSave (estimated savings versus on-demand with ShareSave).
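These summary figures relate to each other by simple arithmetic; a small worked sketch (the input numbers used in testing are examples, not nOps pricing):

```python
"""Sketch of how the Savings Summary figures are derived from each other."""

def savings_summary(total_savings: float, sharesave_fee: float,
                    on_demand_cost: float, days_in_period: int) -> dict:
    net = total_savings - sharesave_fee
    return {
        "net_savings": round(net, 2),
        # percentage saved versus what pure on-demand would have cost
        "net_savings_pct": round(100 * net / on_demand_cost, 1),
        # annualized by scaling the period up to 365 days
        "annualized_net_savings": round(net * 365 / days_in_period, 2),
    }
```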

You can also select the following tabs within the Saving Summary and Breakdown section to see a graphical representation and breakdown of your savings with respect to date and opportunity type:

  • Total Savings
  • Total ShareSave Fee
  • Net Savings
  • Annualized Net Savings

List of Opportunities

The List of Opportunities section consists of the following list:

  • List of Risk-Free Commitments
  • List of Graviton
  • Resource Scheduler
  • Auto Scaling Groups
  • Amazon Relational Database Service

List of Risk-Free Commitments

Actions against the List of Risk-Free Commitments are automatic and are carried out by the nOps ShareSave AI.

In the List of Risk-Free Commitments, click on an opportunity name to expand the list and see the details including resource name, account name, region, previous configuration, suggested configuration, detection date, total savings, and action:

List of Graviton

In the List of Graviton, click on an opportunity name to expand the list and see the details including resource name, region, previous configuration, suggested new configuration, detection date, and total savings:

Resource Scheduler

In Resource Scheduler, click on an opportunity name to expand the list and see the details including instance type, account name, region, schedule name, new configuration, confidence level, total saving, and action:

Click on the Schedule button against an opportunity to create an automated schedule to turn the instance on and off with the help of EventBridge. To learn more about the Resource Scheduler, see Utilize nOps Resource Scheduler with EventBridge Integration to Reduce Costs Automatically.

Auto Scaling Groups

In the Auto Scaling Groups, click on an opportunity name to expand the list and see the details including instance type, account name, current configuration, recommended one-time configuration, recommended dynamic configuration, scheduling status, one-time configuration savings, dynamic configuration savings, and action:

Click on the Schedule button in the Action column against the opportunity for the Autoscaling Group that you want to schedule. This will open up a list with two options: Create One-Time Configuration and Create Dynamic Configuration. Select your desired configuration.

If you select Create One-Time Configuration, you will see the one-time configuration screen. Populate the fields and click Create to create the schedule:

If you select Create Dynamic Configuration, you will see the dynamic configuration screen. Populate the fields and click Create to create the schedule:

Amazon Relational Database Service

In the Amazon Relational Database Service, click on an opportunity name to expand the list and see the details including resource name, RDS type, instance size, account name, region, current schedule, recommended schedule, total savings, and action:

Click on any opportunity name to see the details of the resource and its usage pattern:

To schedule a recommendation, click the Schedule button against the recommendation. If you click the Schedule button, you will see two options:

  • Create New Schedule
  • Attach Existing Schedule

To create a new schedule based on the nOps recommendation, click Create New Schedule; all fields will be prepopulated according to the recommendation. Simply click Create to start reducing your spend:

To attach an existing schedule to the RDS resource(s), click the Attach Existing Schedule button, select an existing schedule from the dropdown list, and click Attach.

Filters

Use the Filter section to apply filters on the entire ShareSave dashboard based on the selected cloud accounts and opportunity types:

Prerequisites

You must have access to your AWS Master Payer Account ID. With this ID, we will be able to generate a $5 Marketplace Private Offer (MPPO) and send it over to you. It’s very easy to accept the offer and onboard an account.

nOps recommends that you link your AWS accounts to nOps with Automatic Setup. If you are an advanced AWS user and have specific requirements, you can also link your account to nOps with Manual Setup, Multi-Account Setup with Terraform, and Multi-Account Setup with CloudFormation.

If you are an existing nOps customer, in order to get access to all the cost saving features of ShareSave, you might need to update the IAM permissions for nOps. To update the IAM permissions:

  1. Log into your nOps account.
  2. Click on your profile name in the top right corner.
  3. Navigate to the IAM Policy Update section. You will see a list of your AWS accounts associated with nOps.
  4. Click the Update on AWS button against your master payer account. Make sure that you are already logged in to your master payer account before you click the button.
  5. Click the Update on AWS button against all associated accounts. Make sure that you are already logged in to the respective AWS account before you click the button (optional).

ShareSave Configuration

In the scope of this document, we will go through the configuration of ShareSave – List of Risk-Free Commitments.

To learn about ShareSave – Resource Scheduler and its configuration, see Utilize nOps Resource Scheduler with EventBridge Integration to Reduce Costs Automatically. To learn about the ShareSave – List of Graviton and its configuration, see Graviton.

To configure ShareSave – List of Risk-free Commitment, follow these steps:

  1. Log in to your nOps environment and head over to the ShareSave dashboard:
  2. On the ShareSave dashboard, in the List of Opportunities section, click the Configure Risk Free Commitment button:
    1. If your account has already been configured, you won’t see the Configure Risk Free Commitment button. The button is only visible to new customers and existing customers who haven’t configured ShareSave yet.
    2. Risk Free Commitments are only available to customers that onboard a Master Payer Account. You will not see the button if you haven’t onboarded a Master Payer account.
  3. Once you click the Configure Risk Free Commitment button, the following pop-up will appear:
    1. If the pop-up does not appear, make sure that the pop-up isn’t being blocked by your browser.
    2. Before you click Proceed, make sure that you’re logged in to your AWS master account in the same browser.
  4. After you click Proceed, nOps will take you to your AWS console’s CloudFormation Quick create stack page, with all the required information pre-filled, in order to create the required Cost and Usage Report (CUR). Acknowledge and click Create stack:

ShareSave – List of Risk-Free Commitments configuration is now complete.


Note: It will take up to 24 hours for the data to populate in nOps.


Graviton Recommendations#

Under Construction

A group of highly intellectual Technical Writers are doing their best to keep up with equally intellectual Developers.

Developers outnumber us 30 to 1. We are doing our best. Cut us some slack. Contact the nOps Support team in the meantime.

Settings#

Add Users to nOps#

Add a new user to nOps

Adding users to nOps

You can subscribe to nOps from the AWS Marketplace or using the payment gateway integrated within nOps.

You can sign up using this link.

The first user who signs up for nOps has Admin rights for the nOps account by default.

This admin user can add additional users to the nOps account by following the steps below.

Go to Settings from the top-right user avatar drop-down.

Click Team Members in the sidebar, then click Invite New Member to add additional team members.

Type the email address of the team member you want to invite to nOps. You can select a role, either “Member” or “Admin user”.

Once you add the email and click Send Invite, the message below will be shown in the browser.

The user whose email address was added will receive an invite from nOps like the one below.

The user can then set up their account by entering the details shown on the screen below.

Add a User to a Partner Account#

How to add another user to your partner account

Adding a User to a Partner Account

Easily add a new user to access the partner account.

Recommended for partner employees or contractors only.

Go to Settings > Partner User.

Click +Invite New User.

Enter the user’s email address, select their role, and click Invite.

Add a Client to the Partner Account#

How to add a client to your partner account

Adding a Client on the Partner Account

To add a client to the partner account:

Go to Settings:

Select on Clients on the left.

On the top right click on +New Client

Now select how the client should be invited:

Create a new client – Add the client’s name and then go to their account to finish setup. It will bring you to a page to add their AWS account to start retrieving the account data.

Add the AWS account. There is an option to add additional users to this account. Here you can add the client to the nOps account for their AWS account only.

Invite a client for well-architected assessment – This will send an email to the client asking them to add their AWS account for the well-architected assessment. The email body can be customized, or you can use the default message.

AWS account data will take 2-4 hours to load into the nOps account.

Notification Center#

Using the Notification center for Cost Changes, nOps Rules, Security, SOC2 Readiness, HIPAA Readiness Report, and CIS Readiness Report

How to use the Notifications Center

The Notifications Center enables alerts of changes within your AWS accounts. Alerts can be sent to an email address and/or a Slack channel. You can set alerts for the following: Cost Changes, nOps Rules, Security, SOC2 Readiness Report, HIPAA Readiness Report, and CIS Readiness Report.

On the main dashboard, click the name of the logged-in user and click the Settings menu item to go to the settings page.

On the settings page, click on the Notifications Center menu item on the left-hand side of the screen.

In the Notifications Center, you can configure different notifications based on Cost Changes, nOps Rules, Security, SOC2 Readiness Report, HIPAA Readiness Report, and CIS Readiness Report.

Example email for Cost Control:

Disable Notifications for a User#

How to disable Notification for a user

To stop a user from receiving notifications, take the following steps:

Log in to your nOps account

On the top left corner of the dashboard, where the name of the currently logged-in user is displayed, click the arrow to reveal the drop-down menu. Click the Settings menu item.

This will lead to the nOps notifications page, which shows a list of different notification configurations.

Click the tab for the notification you want to disable for the user. For this example, we will use the Cost Changes tab. Look under Users who you want to Notify (optional).

Click the X next to the email that should stop receiving notifications. Next, go through each section to make sure the user is not listed on the other tabs (Cost Changes, nOps Rules, Security Dashboard, SOC2 Readiness Report, HIPAA Readiness Report, CIS Readiness Report).

Click Update Preferences.

Change Password#

Change your nOps password

How to Change nOps Password

Navigate to the top left corner menu that displays the name of the logged-in user. Click the menu and then click the Settings menu item.

On the Settings page, navigate to the left-hand side and click on the Change Password menu item.

This will lead to the Change Password page

Enter your old password, your new password, and the new password again to confirm, then click the Change password button to save the new password.

Recover a Forgotten Password#

Find out how to reset a forgotten password

How to Recover a Lost Password

Visit the nOps Sign in page on apps.nops.io

On the Sign In page navigate to the Forgot Password link and click it.

This will lead to the Password Reset page. On the page, enter your nOps registered email in the box, and click the “Reset My Password” button to recover/reset your password.

Switch Between Different AWS Accounts#

Learn how to toggle between AWS Accounts

How to switch between different AWS accounts

nOps is designed to allow the integration of multiple AWS accounts, making it more robust by helping you analyze more than one AWS account. The analysis is done one account at a time, and here is how to switch between different AWS accounts:

Navigate to the top left corner of the screen, where the name of the currently logged-in user is displayed. Click the name of the user. On the drop-down menu, there is a menu item called “Switch Account”.

Move the mouse over the “Switch Account” option to view a list of AWS accounts that you can switch to.

Click on the specific account you wish to switch to based on the list of AWS accounts displayed.

This will switch to the selected account.

Data Purge#

How nOps removes data from an account

How nOps purges data

The data purge policy describes how we purge and remove data after you delete an account.

Delete account from nOps

  • A client admin can delete an AWS account on the nOps settings page.
  • When the admin clicks delete account in nOps, the account will be scheduled for deletion within 30 days.

When the account deletion queue task is executed:

  • Deletes all data fetched from the AWS API.
  • Deletes all cost data ingested into nOps.
  • Keeps the user login (the email address and password used to log in to the nOps system).

Marketplace subscription:

  • Deleting an AWS account in nOps doesn’t change the status of the Marketplace subscription.
  • Customers should go to the AWS console to unsubscribe from nOps: https://console.aws.amazon.com/marketplace/home

Delete client from nOps partner portal

  • A partner admin can delete a client on the nOps partner portal client settings page.
  • When the admin clicks delete client in nOps, the client will be scheduled for deletion within 30 days.

When the client deletion queue task is executed, it:

  • Deletes all data fetched from the AWS API.
  • Deletes all cost data ingested into nOps.
  • Deletes the client from the list of clients.
  • Keeps the invoice generated for that client.
  • Deletes all Personally Identifiable Information associated with the client, including email addresses, user names, and any other PII.

For more information, please see the nOps Privacy Policy.

Billing/Invoice#

How to add new Customer Billing type MSP#

These are the steps to add a new customer billing type:

Log in to the nOps Partner Dashboard here

Click on the Settings Icon on the top-right corner of the screen.

On the Settings page, there is a side menu with options such as Clients, Partner User, Company Profile, Billing Accounts.

Click on Clients

On the Partner Clients page, click on the New Client button on the top-right corner of the page.

In the dialog box, enter the client name in the text box and select one of the two billing options: Full Access or Well-Architected Review Access. Select the billing option you prefer for the client.

Click the Create Client button to create the client with the specific billing option.

How to Invoice for Clients - MSP#

Partner Dashboard for Invoicing clients

How to Invoice for Clients

nOps Partner Dashboard has an intuitive interface that makes it easy to generate invoices for clients. Here are the simple steps to do that:

Move the mouse to the top bar that contains the name of the logged-in user. On the drop-down that displays, click the Partner Dashboard menu item.

This will lead us to the Partner Dashboard home page.

On the Partner Dashboard, go to the top menu bar and click on the Billing menu item.

Select Customer Invoicing; this will show the Customers List page. Each customer item has an arrow link.

Click the arrow link to open the different invoices for the customer.

On the customer invoice page, go to the Action column and click the print icon on the specific invoice you wish to download or view.

This will download the PDF version of the invoice.

How to configure Chargebacks in Chargeback Center#

How to use Chargeback Center to shift-left cost optimization and make your users cost accountable for the cloud they use

Chargeback Center

Overview:

Using the Chargeback Center brings visibility and accountability to AWS cloud costs: tie cloud resources to teams and attach chargeback features to them. The more AWS accounts you have, the bigger this problem becomes. The Chargeback Center also gives teams a handle on their own budgets and helps rein in uncontrolled cloud spend.

The Chargeback Center does not create invoices. It will help visualize spend for business units, teams, and billing.

How to use Chargeback Center:

Go to Cost Control>Chargeback Center

The Chargeback Center is your dashboard for the chargeback “perspectives” that you can create. You can create as many buckets as you wish to represent different perspectives, such as multiple workloads for different teams.

Create a new Chargeback by clicking Create New Chargeback, then complete the simple pop-up form.

On the pop-up fill out the following fields:

Chargeback Name – Enter a name for the Chargeback that makes sense to your organization. If it’s for a team, project, or person, use a combination of identifiers like “Developer-UX-Project5-Staging”.

Chargeback Type:

  • Predefined – Use the default labels provided; Business Unit, Team, or Billing. These are just text labels that make sense to you and don’t modify how nOps works.
  • Custom – Create your own label if the Predefined Labels don’t cover your use case.

Set Monthly Budget Limit – Create a budget for the Chargeback, and select whether an email should be sent if spend exceeds the budget.
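The budget check behind that email option amounts to a simple comparison; a sketch with hypothetical field names (nOps evaluates this server-side):

```python
"""Sketch of the monthly budget-limit check behind the email alert."""

def budget_alert(monthly_budget: float, month_to_date_spend: float) -> dict:
    over = month_to_date_spend > monthly_budget
    return {
        "over_budget": over,
        "overspend": round(max(0.0, month_to_date_spend - monthly_budget), 2),
        "send_email": over,  # only notify once spend exceeds the limit
    }
```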

Filters:

  • AWS Managed Services – The AWS services that nOps will include in your workload. This defaults to All.
  • VPC – The VPCs that contain the resources that nOps will include in your workload. This defaults to All.
  • Tags – Select tags to be assigned to the resources you want to include, e.g., “ApplicationA.”

Click Save

There’s an option to Favorite this Chargeback for easier filtering by clicking the star in the top right.

Click See chargeback details

  • You will see the spend and difference over the months broken down on this page.
  • Download the chargeback as an invoice.
  • Track whether the chargeback has been sent, paid, or left unpaid.

Other things you can do with nOps chargeback:

  • You can go from AWS Region and accounts down to the granularity of AWS tags to allocate resources to the chargeback bucket.
  • You get a dashboard for that chargeback and you get notifications.
  • You can set the period to monthly, quarterly, or annually.
  • You can see underspend and overspend.
  • You can see the history and mark overspend as paid or not.

How to View the Cost of a Kubernetes Service (Container Cost)#

Granular costs to your EKS, pods, and services.

How to view the cost of Kubernetes Pod and Service (Container Cost)

nOps is designed to give granular costs for your EKS pods and services. To view the cost of your EKS resources, use the following steps:

  1. Click Cost Control on the top menu and select Container Cost from the list.
  2. The dashboard contains a graph that shows daily container cost. Scroll down to see a list of EKS clusters that have been created within the AWS account. This list shows the cost of each EKS cluster per day:
  3. The EKS clusters are shown in the EKS Clusters List. From here you can view detailed costs for components in a Kubernetes cluster, such as the cost of each Worker Node (EC2 instance), Services, and Pods. Click on an EKS cluster for more details. In our screenshot, we click on the ys-addon-po-s cluster.
  4. This will provide details about the Services, Nodes (EC2 Instances), and Pods within the cluster:
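To see how cluster costs can be broken down to pod granularity, here is a simplified allocation sketch: split each node's cost across pods by their share of requested CPU. Real container-cost tooling also weighs memory and idle capacity, so treat this as an illustration only:

```python
"""Sketch: allocate a node's cost to pods by requested-CPU share."""

def allocate_pod_costs(node_cost: float, pod_cpu_requests: dict) -> dict:
    """Split node_cost across pods by their share of requested CPU (vCPUs)."""
    total = sum(pod_cpu_requests.values())
    if total == 0:
        return {pod: 0.0 for pod in pod_cpu_requests}
    return {pod: round(node_cost * cpu / total, 4) for pod, cpu in pod_cpu_requests.items()}
```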

How to use this data

The instances in the cluster are listed with a status. If you see inactive instances in the cluster, consider deleting them if they are unused in order to save costs. Any data contained in these clusters can be archived or moved to another resource or resource type.

Partner Consolidated Billing Setup Guide#

This guide is for AWS Consulting Partners that are using nOps to manage consolidated billing and invoicing for multiple clients.

Consolidated Billing for AWS Partners

AWS Consulting Partners can grow their AWS practice by adding a service to manage billing and invoicing on behalf of their AWS customer clients.

For example, an AWS customer that already works with an AWS partner to benefit from their technical expertise is often willing to extend this to benefit from help understanding and managing their billing. As their AWS footprint grows, so does the complexity of their billing and this can become a large, regular monthly effort to keep on top of costs.

An AWS partner that uses nOps can introduce the benefits of consolidated billing to ease the burden of billing and work with clients to further reduce their cloud bill with intelligent management of costs and discounts.

https://www.loom.com/embed/123b7e80918b4ac7a0960f7f5ebdb4f1

This guide has been written for AWS partners so they can understand the simple end-to-end process of adding new clients to their nOps system.

  • Introduction
    • Who is this guide for?
    • What you’ll find in this document
      • Why this document is modular and not a monolith
      • Style
      • Terminology
  • Step 1 – Setting up your AWS Master Payer Account
    • Suggested Reading
    • How to set up a new AWS Master Payer Account
  • Step 2 – Set up and link nOps Accounts
    • Suggested Reading
    • How to setup nOps accounts
  • Step 3 – Configuring nOps Billing
    • Understanding credit and discount sharing
    • How to create a Billing Import
  • Step 4 – Monthly invoices
    • Key points
  • Conclusion
    • Feedback and Support
    • Further reading

Introduction

There are four steps in this guide to delivering nOps-managed consolidated billing for an AWS Consulting Partner:

Step 1 – Configure AWS
Step 2 – Set up nOps
Step 3 – Set up Billing
Step 4 – Get Invoices

Who is this guide for?

This guide is for Cloud Solution Architects or FinOps Practitioners at AWS Consulting Partners that are using nOps to manage consolidated billing and invoicing for multiple clients.

There are two main scenarios for setting up consolidated billing and both are addressed in this document:

  1. A “new” AWS partner that is doing consolidated billing for the first time. They have multiple clients with multiple accounts, but they are not using AWS Organizations yet. This is addressed specifically in Step 1, then the rest of the Steps apply.
  2. An “existing” AWS partner that is doing consolidated billing with AWS Organizations but they want to move to nOps to benefit from the extra cost management features and invoicing.

The two scenarios are only different because of their starting point. This document delivers the same outcome in both scenarios: nOps providing monthly invoices for consolidated billing.

What you’ll find in this document

To address the two scenarios in a modular, accessible fashion, we have arranged the document as follows:

Step 1 – AWS Organization and Master Payer Account
Step 2 – Linking nOps Accounts to AWS Accounts
Step 3 – Configuring Billing Imports and Exports
Step 4 – Generating Invoices for your customers

AWS Consolidated Billing is a large and complex topic; to make it more digestible in this document, we have made some design decisions to help:

  • Use a modular format instead of a monolith document.
  • Use font styles to clarify product features and the vendor.
  • Use consistent terminology to avoid confusion.

Why this document is modular and not a monolith

To keep this document to a reasonable size we have decided to modularize the content:

  • This document covers the entire “left-to-right” process and only goes deep enough in each step to describe the actions required.
  • For details on each step there are linked documents that have screenshots and deeper technical guidance.

Style

There is an intentional capitalization and coloration style in force to help clarify specific features in AWS and nOps:

  • nOps product components will be capitalized and coloured blue, for example as Partner Dashboard.
  • AWS product components will be capitalized and coloured orange such as AWS Organization.

Terminology

AWS Organizations and Consolidated Billing can be confusing, and much of that confusion arises from the volume of new terminology.

The following terminology is used in this document:

  • Partner MPA is the Partner’s own AWS Master Payer Account that all the Partner’s client AWS account bills roll up to.
  • Client Member Account is the Partner’s client’s AWS account that rolls up its bills to the Partner MPA.
  • Client MPA is a special case when a Partner’s Client has their own separate Master Payer Account.
  • AWS Organization is the AWS account management service that manages multiple AWS accounts under one consolidated organization.
  • Partner Dashboard is the Partner-level screen to access Partner-only functions of nOps.
  • Client Dashboard is the Client-level screen to access Client-only functions of nOps.

Step 1 – Setting up your AWS Master Payer Account

This step applies to Scenario One only, “AWS partners that are creating a new consolidated billing system, perhaps as part of a new service portfolio offering to resell AWS and/or manage customer invoicing.”

It’s recommended that your AWS Master Payer Account (“MPA”) does not run any AWS services other than those required to be responsible for AWS Organizations and billing. For this reason, and where no existing MPA is in place, it is cleaner to create a new AWS account from scratch as the MPA.

Suggested Reading

We recommend becoming familiar with the best practices for setting up a new AWS account, using AWS Organizations and understanding consolidated billing and Master Payer Accounts.

  • nOps documentation
    • Setting up Consolidated Billing
  • AWS documentation
    • What are AWS Cost and Usage Reports?

How to set up a new AWS Master Payer Account

The steps below are intended to explain the process at a high-level. For a more detailed guide please refer to nOps Documentation – Setting up Consolidated Billing.

1. Create a new AWS account
Create a new AWS account to be designated as your MPA and follow AWS account best practices for security. Restrict access to this MPA to only the people who will be responsible for AWS Organizations and Billing. Do not let developers or any other users have access.
2. Set up an S3 bucket for your AWS billing files
Follow this guide to set up an AWS S3 bucket in your Master Payer Account for your Cost and Usage Report files: Setting up an Amazon S3 bucket for Cost and Usage Reports. Just use the defaults of no public access. Take a note of the friendly name of your bucket, such as “acmecorp-mpa-cur”, because the nOps setup step will need it.
3. Configure Billing
Go to the Billing page of the Master Payer Account. Paste your S3 bucket name into Receive Billing Reports. If you want to limit AWS credits only to the account they are applied to, select Disable credit sharing. Configure RI discount sharing.
4. Create a new AWS Organization
Consider a structure for your Organizational Units and other AWS Organizations best practices. Create an OU for the partner accounts – put your MPA in here along with other internal “service accounts” you might have, like Security, Backup, Operations, and DevOps. Create an umbrella OU for customer accounts, then create a sub-OU under it for each customer. Consider Service Control Policies for managed client accounts, for example to not allow them to leave arbitrarily without following an exit process where they can settle their account before you remove them from your AWS Organization.
5. Add your existing AWS accounts
Use the AWS Organizations feature to Invite existing AWS accounts into your new AWS Organization.

A completed AWS Organization with an MPA might look like this:

Now you have an AWS Organization with one Master Payer Account and one or more Member Accounts organized into Organizational Units.
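The S3 bucket and Cost and Usage Report from step 2 above can also be created programmatically. The sketch below builds the parameter dictionary for the AWS Cost and Usage Reports `PutReportDefinition` API call; the bucket name, prefix, and region are placeholder assumptions, and the actual `boto3` call is shown as a comment so you can adapt it to your environment.

```python
# Sketch: parameters for the AWS CUR PutReportDefinition API call.
# The bucket name, prefix, and region below are placeholder assumptions.
report_definition = {
    "ReportName": "acmecorp-mpa-cur",           # friendly name noted for nOps setup
    "TimeUnit": "HOURLY",
    "Format": "textORcsv",
    "Compression": "GZIP",
    "AdditionalSchemaElements": ["RESOURCES"],  # include resource IDs in the report
    "S3Bucket": "acmecorp-mpa-cur",             # the private S3 bucket from step 2
    "S3Prefix": "cur",
    "S3Region": "us-east-1",
    "RefreshClosedReports": True,
    "ReportVersioning": "OVERWRITE_REPORT",
}

# With boto3 installed and credentials for the Master Payer Account:
# import boto3
# boto3.client("cur", region_name="us-east-1").put_report_definition(
#     ReportDefinition=report_definition
# )
```

Creating the report definition this way is equivalent to the console steps, and keeps the configuration reviewable in version control.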

Step 2 – Set up and link nOps Accounts

The goal of this step is to configure nOps in preparation for the final step: linking nOps Client Accounts with your AWS Master Payer Account and your AWS Member Accounts.

Suggested Reading

We recommend becoming familiar with the best practices for setting up a new AWS account, using AWS Organizations and understanding consolidated billing and Master Payer Accounts.

  • nOps documentation
    • Setting up Consolidated Billing

How to set up nOps accounts

Thanks to the nOps account setup wizard, you can create nOps accounts and link them to existing AWS accounts in one step.

1. Make sure you have an nOps Partner Account and can access the Partner Dashboard
2. Initialize your nOps environment by following the Getting Started Guide.
  • Creating the initial settings
  • Configure AWS using the Wizard
3. Create an nOps Client Account for your Partner AWS Accounts
Create a Client Account inside nOps that will represent your Partner MPA and other partner accounts, such as a shared Operations or Security account: all of the partner AWS accounts that sit under the Partner Organizational Unit in AWS Organizations. Name the Client Account “Partner-<name>-MPA”. On the Partner Client screen, find your new nOps account and click Go to account.
4. Complete the Account Setup Wizard steps
Follow the wizard's on-screen instructions.
5. Create an nOps Client Account for each Client AWS Account
In your AWS Organization you should have a Customer OU, with an individual OU for each customer under it; each customer's AWS accounts are organized under their own OU. In this step, create an nOps client for each customer and link it to one of their AWS accounts. If the customer has multiple AWS accounts, add those later via the Add AWS Account feature in their Client Dashboard.

Now that nOps accounts have been created and linked to AWS accounts, you are ready to configure AWS billing.

Step 3 – Configuring nOps Billing

By configuring nOps billing you are telling nOps about the relationship between accounts and how to process credits and discounts as part of the billing process.

This is especially important for AWS partners that have multiple clients and may have a different billing relationship with each client.

The simple case is when a partner has one MPA and multiple clients who all have the same billing configuration with respect to credits and discounts.

❗ If a customer’s individual AWS account is set to Reserved Instance sharing but the AWS partner Master Payer Account is set to unshared, the customer’s setting is overridden to unshared and they cannot change it: this setting is AWS Organization-wide and set at the Master Payer Account. This may change the customer’s bill, so the AWS partner needs to meet with the customer to set expectations about how consolidated billing might impact it. The good news is that nOps Billing Exports can override the MPA billing settings for individual client accounts, meaning the customer could be reset to RI sharing even though that differs from the MPA setting. Why would a partner want different settings inside one AWS Organization? It is likely to be a special case. Previously the only way to do this was to run another AWS MPA with different settings, which means the partner has to manage multiple MPAs; with nOps the partner can have a single MPA with multiple settings.

Understanding credit and discount sharing

Inside the AWS account settings for Billing, you can toggle the sharing of Credits and Discounts which means:

Sharing enabled: Any unused credit or discount will be shared across the AWS Organization. Any unused credit available for sharing across the AWS Organization will apply to this account.
Sharing disabled: Any unused credit or discount will be lost. Any unused credit available for sharing across the Organization will not apply to this account.

Someone might disable sharing of credits and discounts to have a predictable, though more expensive, cloud bill each month.
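As a rough illustration of the difference, here is a simplified model of credit application with and without sharing. The `apply_credit` helper is hypothetical and the figures are made up; this is not the actual AWS billing algorithm, which has many more rules.

```python
# Simplified illustration (NOT the AWS billing algorithm) of how credit
# sharing changes which accounts a credit can offset.
def apply_credit(account_costs, credit_account, credit, sharing_enabled):
    """Apply a credit to one account; optionally share the remainder."""
    costs = dict(account_costs)
    used = min(credit, costs[credit_account])
    costs[credit_account] -= used
    remainder = credit - used
    if sharing_enabled and remainder > 0:
        # Spread the leftover credit across the rest of the organization.
        for acct in costs:
            if acct == credit_account:
                continue
            applied = min(remainder, costs[acct])
            costs[acct] -= applied
            remainder -= applied
    # With sharing disabled, any remainder is simply lost.
    return costs

org = {"customer-a": 40.0, "customer-b": 100.0}
# Sharing enabled: customer-a's bill drops to 0 and the unused $20
# offsets customer-b's bill.
print(apply_credit(org, "customer-a", 60.0, sharing_enabled=True))
# Sharing disabled: customer-a's bill drops to 0 and the unused $20 is lost.
print(apply_credit(org, "customer-a", 60.0, sharing_enabled=False))
```

This is why disabling sharing buys predictability per account at the cost of losing unused credit.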

What should I set as the AWS MPA default for sharing costs and discounts?

When an AWS account joins the AWS Organization, it inherits the blended cost and the sharing of credits and discounts set at the Master Payer Account level.

The best configuration for nOps to assume control of costs and discounts is to configure the AWS Master Payer Account as follows:

RI / Savings Plan Sharing: Enabled. nOps lets you override this on a per-account basis from within nOps, but the Enabled default is the most common and most beneficial for overall costs; isolated/unshared accounts are the exception.
Credit sharing: Disabled. Credits are intentionally applied to an individual account. If credit sharing is enabled, unused credits are applied across the whole AWS Organization, which is often not the intention in partner account management: the client typically secures AWS credit for one AWS account and never intends to share it across partner AWS accounts.

To learn more about how AWS manages credits, read AWS Credits.

How to create a Billing Import

First you need to tell nOps which account is your Partner MPA, the account to import consolidated billing from. You do this by linking it as a Billing Import.

Go to Partner Dashboard → Settings → Billing Imports

Once you click Add Billing Import, the configuration screen will appear:

How to complete each field:

Billing Import Name: Can be anything, but we recommend your partner name plus “MPA”, for example “Acme Corp MPA”.
Choose Client: The nOps account for your Partner AWS accounts that you configured earlier; it includes your MPA.
Choose an Account: Your Partner AWS MPA.

nOps will start ingesting and processing the AWS Cost and Usage Report (“CUR”) for your Partner MPA.

How to create a Billing Export

A Billing Export is a configuration that creates a client AWS bill from your Partner MPA account (the Billing Import):

If you’ve configured a Billing Import for your Partner MPA, then nOps is already processing data as it becomes available from AWS.

The next step towards producing client invoices is to link the Partner MPA to the client and tell nOps how to process costs and discounts for that client. Go to Partner Dashboard → Settings → Billing Exports.

NOTE This will override the settings in the AWS Master Payer Account, which are assumed to be set to Sharing enabled for RIs and Savings Plans.

How to complete each Billing Export field:

Billing Import Name: The Billing Import you created in the previous step, for example “Acme Corp MPA”.
Choose Client: The nOps account that maps to a specific client, under which there will be one or more AWS accounts.
Choose an Account: The client's AWS account to report on.
Cost Type: If you don’t want the customer to benefit from a lower blended rate, perhaps because the customer specifically wants an unblended rate, select Unblended. If you select Blended, the customer’s monthly invoice will fluctuate for reasons outside their usage, and this expectation should be set.
Reserved Instances Cost Allocation: Sharing means that RI investments are applied to the purchasing account first, and any unused capacity is then shared across the AWS Organization, including outside the customer's OU; the account likewise benefits from unused RIs elsewhere in the Organization. Unshared means unused RIs are lost and this AWS account won't benefit from spare RIs in the AWS Organization.
Savings Plan Cost Allocation: The same concept as for RIs above.
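To see why the Cost Type choice matters, recall that AWS computes a blended rate by dividing the total consolidated cost of a usage type by the total usage across the Organization. The sketch below uses made-up figures for two accounts to show how a blended rate shifts cost between them:

```python
# Simplified blended-rate illustration (ignores RI/SP allocation rules).
# All figures are made up for illustration.
accounts = {
    # account: (hours used, unblended rate paid per hour)
    "customer-a": (100, 0.10),   # e.g. pays on-demand
    "customer-b": (300, 0.06),   # e.g. covered by a discount
}

total_cost = sum(hours * rate for hours, rate in accounts.values())
total_hours = sum(hours for hours, _ in accounts.values())
blended_rate = total_cost / total_hours   # 28.0 / 400 = 0.07

for name, (hours, rate) in accounts.items():
    print(f"{name}: unblended ${hours * rate:.2f}, "
          f"blended ${hours * blended_rate:.2f}")
```

Here customer-a's invoice goes down under blending while customer-b's goes up, even though neither changed their usage; that is the fluctuation the Cost Type setting controls.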

Step 4 – Monthly invoices

As part of the AWS monthly billing cycle, nOps will automatically generate your bills for the configured Billing Exports and make them available under the Partner Dashboard Billing option.

Key points

  1. You will only get invoices for Billing Exports you have set up.
  2. If you have AWS accounts for which you have not configured a Billing Export, those will be shown on the Partner invoice.
  3. If you haven’t explained how consolidated billing works, and how blended costs with credit and discount sharing can influence a customer’s invoice, then you might be queried.
  4. Your clients can’t see their own invoices via their own Client Dashboard; you need to download and share the PDF with them.
  5. If your customers still have access to the Billing Dashboard in AWS and don’t understand how consolidated billing works, expect queries; avoid these with our guide (how to prepare….)

Conclusion

This guide should have helped you to:

Step 1 – Set up AWS: Configure your AWS Organization and consolidated billing, with Master Payer and Member Accounts in Organizational Units.
Step 2 – Set up nOps: Configure nOps and add Partner and Client accounts.
Step 3 – Set up Billing: Use Billing Imports and Billing Exports to automate client invoices from the Master Payer Account.
Step 4 – Access Invoices: Where to find invoices and how to share them with clients.

Feedback and Support

We welcome feedback on this document to help us improve.

Please email help@nops.io or use live chat in the nOps application.

Further reading

A number of documents and articles are linked throughout this guide; we collect them here for ease of reference:

  • nOps Documentation
    • Setting up Consolidated Billing
    • Creating the initial settings
    • Configure AWS using the Wizard
  • AWS Documentation
    • What are AWS Cost and Usage Reports?
    • AWS account best practices for security
    • Setting up an Amazon S3 bucket for Cost and Usage Reports
    • Service Control policies

Integrations#

EventBridge Integration#

Automate Savings with nOps EventBridge Integration

EventBridge Integration makes it easier for nOps to automate workflows in the client’s environment.

Amazon EventBridge is a serverless event bus that makes it easier to build event-driven applications at scale using events generated from your applications, integrated Software-as-a-Service (SaaS) applications, and AWS services.

With EventBridge integration, nOps will be able to:

  • Automate events based on nOps rules.
  • Trigger automation to reduce the size of underutilized EC2 instances.
  • Purchase or exchange RI automatically risk-free, if RI utilization is not optimized. To see the nOps list of risk-free commitments, go to ShareSave Dashboard > List of Opportunities > List of Risk-Free Commitments.
  • Turn EC2 and RDS instances in a group on and off with the Resource Scheduler.
  • Trigger messages in multiple services beyond the standard event messages.

To add nOps as an EventBridge partner, go to AWS > Amazon EventBridge > Partner event sources, and select nOps as one of the options to listen to. Click Setup and configure the options.

Once the EventBridge integration is set up, you can create event sources directly from the nOps application and deploy them to any account and region that you’ve connected to nOps.

To integrate your AWS EventBridge with nOps, log in to nOps and click User Avatar > Organization Settings > Integrations > EventBridge. In the EventBridge tab, click + Create EventBridge.

On the Create New Event Bridge page:

  1. Create a name for this EventBridge.
  2. Select the AWS account you want to deploy the EventBridge into. The AWS accounts list shows only the accounts that you have onboarded into nOps.
  3. Select the region you want to deploy the EventBridge into.
  4. Click Create.

When you click Create, nOps will deploy the EventBridge into your selected AWS account and region by creating an event bus in the selected AWS account.

To see the event bus that nOps just created, go to AWS > Amazon EventBridge > Event buses > Custom event bus.

The next step is to add an EventBridge target into Webhooks.

Adding EventBridge Target into nOps Webhooks

nOps has a Webhook for almost every cost optimization rule in the nOps environment.

When you create a Webhook, anytime an associated event fires, a message is sent to either the endpoint you define or the EventBridge that you create.
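When the target is an EventBridge, the message arrives on the custom event bus as a regular EventBridge event. The minimal sketch below shows the shape of an EventBridge `PutEvents` entry; the Source, DetailType, bus name, and Detail fields are illustrative assumptions, not the documented nOps event schema.

```python
import json

# Sketch of an EventBridge PutEvents entry. The Source/DetailType values,
# bus name, and Detail fields are illustrative assumptions, not the
# documented nOps event schema.
entry = {
    "Source": "custom.nops",                  # placeholder event source name
    "DetailType": "New Rule Violation",
    "EventBusName": "my-nops-event-bus",      # the custom bus nOps created
    "Detail": json.dumps({                    # Detail must be a JSON string
        "rule": "unattached_ebs_volumes",
        "time": "2024-01-01T00:00:00Z",
        "description": "An EBS volume is not attached to any instance.",
    }),
}

# With boto3 and credentials for the target account/region:
# import boto3
# boto3.client("events").put_events(Entries=[entry])

print(entry["DetailType"])
```

Rules and targets attached to the event bus can then match on `Source` and `DetailType` to trigger downstream automation.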

To add an EventBridge target into nOps Webhooks, go to User Avatar > Organization Settings > Integrations > Outgoing Webhooks. In the Outgoing Webhooks tab, click + Create New Webhook.

In the Create New Webhook page:

  1. Create a name for the Webhook.
  2. Select an Event. Notice that the Event option has two fields: in the first, select the event category; in the second, select the actual event.
  3. Select an Endpoint; it can be either a simple endpoint (Target URL) or an EventBridge. In the scope of this documentation, we only cover the EventBridge option, which sends the Webhook to your EventBridge.
  4. Select an EventBridge from the list. The list consists of all EventBridges that you create in nOps > Organization Settings > Integrations > EventBridge tab.
  5. Click Save.

Launch Stack

When you create an EventBridge in nOps, nOps will automatically create an event bus in your selected AWS account. To see the event bus that nOps automatically created, go to AWS > Amazon EventBridge > Event buses > Custom event bus.

Once you have configured the webhook, return to the nOps > Organization Settings > Integrations > EventBridge tab and click Launch Stack against the EventBridge that you created:


Note: Before you click Launch Stack, make sure that you are logged into the AWS account that you used for the EventBridge. You also need to make sure that the account region is the same as the one you specified while creating the EventBridge.


When you click Launch Stack, you will be redirected to AWS > CloudFormation > Stacks > Create stack. All fields on the page will be prefilled.

Follow the highlighted link to view the CloudFormation template, or see Scheduler CloudFormation Template.

Check the acknowledgment box, and click Create stack. The EventBridge is now ready and configured.

Add Key in KMS

Allow the Scheduler Lambda function to use KMS-encrypted EBS volumes, with automatic stack creation for EventBridge in a single click. To enable it, go to nOps > Organization Settings > Integrations > EventBridge and click Add key in KMS. You will be redirected to the Create stack page in AWS:


Note: To check the template, go to the link highlighted above.


Simply enter your KmsArn, acknowledge, and click Create stack.

Jira Integration#

Integrate Jira in nOps to navigate issues

Adding Jira Integration

Integrating Jira into your cloud optimization workflow has never been easier!

With the new nOps OAuth 2.0 Jira integration, nOps has simplified the process of attaching your Jira instance. This allows you to quickly create and assign tasks for issues discovered by nOps. To integrate Jira through a simple sign-up process, you must allow access to your Atlassian account. Once complete, you can create and assign Jira tickets in your company's Jira environment without leaving nOps.

Key features of the Integration

Among the key features of the integration are that:

  • All users can add Jira tickets; you do not need to be a Jira or nOps Admin user to do so. For your convenience, we describe two ways to integrate Jira.
  • The integration is simple and does not require extensive configuration; however, you must provide a link to your company's Atlassian Jira account in order to connect it.
  • Once the integration is complete you can create a Jira ticket through any Resource Details dialogs. These are available from any of the following areas:
    • Spot Advisor
    • Tag Explorer
    • Resource Rightsizing
    • nOps Rules
    • AWS Inventory
    • SOC2, HIPAA, CIS Readiness Reports
    • Workloads

Note: nOps currently integrates a single Jira account per nOps user.

To Add Jira Integration as an Admin User

  1. Log in to your nOps account.
  2. If you are a:
    • Partner Admin select Manage Clients from the profile menu, select a Client from the list, then select Organization Settings from the Profile menu
    • Client Admin select Organization Settings from the Profile menu.
  3. From the left pane select Jira Cloud and follow the prompts to accept the integration.
  4. Once the integration is set up, refresh your instance in order to update your settings.
  5. Return to the Settings Pane and click Jira Cloud to display an option to select a project from the drop-down.
  6. If any associated Jira tickets are available for the selected project you will be able to view them on a status board.
  7. Click a ticket to open it.

To Add Jira Integration as a User or as a Member

  1. Log in to your nOps account. Navigate to and open any area that contains a resource detail, workload WAFR, or Lens assessment. Example: navigate to a tab such as the Security Dashboard and select a rule name that contains a violation by clicking the arrow icon on the right. Click Resource Details for any of the items in the list, then click the Add Jira Ticket icon at the top right of the Details dialog. OR
  2. Click the Workloads tab to list the workloads and select a workload from the list. Turn on a Lens or Compliance Framework toggle and click Assessment. On the Assessments page, right-click a dot menu and select Create Jira Ticket.
  3. Follow the onscreen prompts to accept the integration.
  4. When prompted to select a site, select the Jira link for your environment from the drop-down.
  5. Once the integration is complete, refresh your browser to update your settings. The next time you click Create Jira Ticket, you can begin to create and attach Jira tickets in your nOps workflow.

And that's it!

What if I have already configured Jira integration in my nOps environment?

If you have an existing configuration, you may need to follow the on-screen prompts when opening a ticket to take advantage of the new integration.

PagerDuty Integration#

PagerDuty Integration with nOps

You can use your PagerDuty service to receive the security audit trail related to security best practices from nOps. These security best practices are implemented using different nOps rules. Whenever there is a change in violations, a security audit trail is generated. Follow the steps below to integrate PagerDuty with nOps now.

Steps to Integrate PagerDuty

Create a new service in PagerDuty

Go to Configuration → Services → New Service

From the newly created service, click New Integration.

This new integration generates a new API key that can be added to set up the security audit trail.

Log in to nOps and click your logged-in name to show the Settings icon. On the Settings page, click Integrations. On the Integrations page, click PagerDuty Integration and enter the PagerDuty Integration Key.

Slack Integration#

Integrate with Slack to send support issues to nOps

Slack Integration with nOps

nOps supports sending alerts directly to Slack.

Configuring the integration

Go to Settings

Click “Integrations” in the left menu and select “Slack Integration”.

Enter Webhook URL

Enter a “Webhook name”; it can be any keyword you would like to show on Slack channels when a notification arrives.

Click this link to edit Configurations: https://nops.slack.com/apps/A0F7XDUAZ-incoming-webhooks

Click the “Save” button and you are done integrating Slack.

Sending alerts to Slack

On the Settings page, click Notification Center.

The Slack channel can have a unique name depending on the notification channel that is created: Cost Charges, nOps Rules, Security Dashboard, SOC2 Readiness Report, HIPAA Readiness Report, and CIS Readiness Report.

SSO Integration#

How to Integrate SSO in nOps

Running a secure cloud system is very important. With the new nOps SSO feature, integrating SSO from your favorite SAML 2.0 provider is a smooth and easy process. You can currently integrate Okta, OneLogin, and Azure Active Directory (Azure AD), among others.

Getting Started

To incorporate SSO in nOps, you need to configure the SSO for your SAML provider. To do that, you first need to get some credentials from your nOps dashboard.

Your nOps Credentials

  1. To access your nOps SSO credentials, navigate to your SSO Settings Page. Go to:
    Organizational Settings > SSO if you’re using the client portal
    or Partner Settings > SSO for the partner portal.
    You will be prompted to enable SSO for access to the SSO Settings page.
  2. Copy the Assertion Consumer Service and Entity ID values on the SSO Settings page and paste them into your SAML provider’s SSO configuration settings.
  3. Next you need to map some defined attributes. This should be done using the exact values as described. These attributes are called “Parameters” in OneLogin.
Map this Attribute value → To this Attribute name
Email → User.Email
First Name → User.FirstName
Last Name → User.LastName
Groups → User.groups

When you are done, you will be provided setup instructions which you will then use to configure SSO on nOps.

To learn how to configure SSO, see the configuration documentation.

Webhook Integrations#

Configuring Webhook Integrations for 3rd party apps

Use nOps Webhooks integrations to notify you when a specific event, such as a violation, occurs in your AWS cloud environment.

You must be an Admin user to set up a Webhook.

nOps Webhooks are easy to configure, use HTTP, and are extensible. They support the standard GET, POST, PUT, PATCH, and DELETE methods.

This article contains the following topics:

Before you Begin

Configure a Webhook

Edit or Delete a Webhook

Before you begin

  1. Log in to nOps using an Admin role.
  2. From the Profile menu, select Organization Settings to go to the Settings pane. If you are a Partner Admin: from the Profile menu select Manage Clients, select a client from the list, click the dot menu and select Go To This Account, then select Organization Settings from the Profile menu.
  3. From the Settings pane click Integrations.
  4. Select the Outgoing Webhooks tab

What to know before you create an Outgoing Webhook

  • Create outgoing webhooks to notify you about violations as they occur in your AWS environment.
  • Use the +Create Webhook to create as many webhooks as required
    IMPORTANT: All fields marked with an asterisk are required. The webhook cannot be saved without this information.
  • We currently support the following Event Types and Request Methods. Note: The dialog choices change when a different Request Method is selected.
For this Event Type | Select this Request Method | Triggered
New Rule Violation | POST | Anytime a new rule violation occurs.
Reserved Instance Surplus | POST | When a Reserved Instance surplus is detected.
Reserved Instance Deficit | POST | When a Reserved Instance deficit is detected.

Configure a Webhook

  1. From the Integrations page select the Outgoing Webhooks tab.
  2. Click the +Create Webhook button.
  3. At the Create New Webhook dialog enter a Name for the webhook.
  4. Then select an Event Type from the available options. See the table above for a complete list.
  5. The Request Method field contains GET, POST, PUT, PATCH and DELETE operators.
    To send information about an event, select POST.
  6. Enter the End Point (Target URL) information. An endpoint is the URL to which the notification will be sent. Most customers typically post to a specific Slack channel. You will need to get this URL from the target application. For example, for Slack see the following link on how to get started with incoming Slack Webhooks.
  7. Once the webhook is created, the target application provides information about the header key and value pair. For example, for Slack the header and value pair are Content-type and application/json. You can add multiple headers if required.
  8. The Substitutions table displays attributes that you can use in the Request Body for information about the event. Your choices are:
    • {{time}}: The time the event was detected.
    • {{name}}: The name of the rule that was violated.
    • {{description}}: Details about the event or rule.
  9. The Request Body contains an automatic JSON validator to check the code you enter.
    Add or edit the text attribute to send a specific message, as in the following example:
    {"text": "nOps detected another new rule violation. The rule {{name}} was violated at: {{time}}. Description: {{description}}."}
    For a Reserved Instance Surplus the message could be as follows:
    {"text": "nOps detected a Reserved Instance surplus in your account at: {{time}}. Description: {{description}}."}
  10. Click Save to save the webhook.
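The substitution performed on the Request Body is easy to reproduce. The minimal sketch below uses a made-up event, substitutes the {{time}}, {{name}}, and {{description}} attributes into a Slack-style JSON body, and validates it; the delivery call is shown as a comment, with a placeholder endpoint URL.

```python
import json

# Minimal sketch of the substitution applied to a Request Body template.
# The event values below are made up for illustration.
body_template = (
    '{"text": "nOps detected another new rule violation. '
    'The rule {{name}} was violated at: {{time}}. '
    'Description: {{description}}."}'
)

event = {
    "time": "2024-01-01T09:30:00Z",
    "name": "s3_bucket_public_access",
    "description": "An S3 bucket allows public access.",
}

body = body_template
for key, value in event.items():
    body = body.replace("{{" + key + "}}", value)

payload = json.loads(body)  # the Request Body must be valid JSON
print(payload["text"])

# To deliver it to a Slack incoming webhook (stdlib only), something like:
# from urllib.request import Request, urlopen
# urlopen(Request("https://hooks.slack.com/services/...",  # placeholder URL
#                 data=body.encode(),
#                 headers={"Content-type": "application/json"}))
```

The `json.loads` step mirrors the automatic JSON validation nOps performs before saving the webhook.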

Edit or Delete a Webhook

Once a webhook is created, you can edit it, or delete it from the list on the Outgoing Webhooks tab.

  • Click the Edit icon to edit a webhook, for example if you need to change the endpoint location.
  • Click the Delete/Trash icon to delete a webhook.
    Note: Once a webhook is deleted the information cannot be recovered.

Solution Docs#

Evaluating the Cost Impact of a Changeset#

It’s Friday 4 pm and you need to deploy a new Terraform configuration to AWS before you go home. You are required to verify that the new EC2 changes (create/update/delete), including RDS, do not go over budget. The last time you didn’t verify the changes, it ended up costing the company $100,000 over the weekend because you accidentally deployed a fleet of high-memory instances.

With nOps, this situation is completely avoidable: you can isolate the changes locally and see the cost impact before you deploy anything, with a simple Git pull request (GitHub Action). All you need to do is use the CLI/SDK in your DevOps workflow.

In this solution document we will explore:

  1. nOps CLI and GitHub Actions (full automation)
  2. nOps SDK and CLI (custom automation)
  3. nOps SDK’s Pricing module and Cloud Infrastructure module (manual)

You can use any of the above methods to investigate cost impacts for infrastructure changes made using Terraform to an AWS cloud account.

To learn more about the nOps SDK, see nOps SDK Documentation.

Value of Automated Cost-Impact Evaluation

Lower Daily Costs

Many small companies (and yes, even large enterprises) struggle to control costs while embracing cloud compute capacity. Costs are often focused on “keeping the engine running” and few resources are focused on managing capacity efficiently to lower daily costs of running a cloud infrastructure.

Optimize Resources and Budget Across the Organization

You spin up resources, use them for a bit, and leave them around just in case you need them. Then you forget about these resources and don’t touch them for six weeks. This situation is common but can disrupt your organizational budget. With the help of the nOps Pricing module, this will never happen again: the Pricing module helps you identify the cost impacts of resource changes so you can manage them effectively up front.

Awareness of cost impacts enables early discussions with stakeholders to optimize resources and budgets across the organization, to understand reasons for cost increases and find tools to manage them.

Manage Capacity Costs for Sustainable Operations

Managing capacity costs as business needs evolve is an investment in the future and ensures sustainable operations.

Cost-Effective Implementations

Strategies that consider the inter-connectivity and interdependency of costs to resources, lead to successful and cost-effective implementations.

Sometimes you migrate workloads without knowing what they do or how useful they are, just in case one client might be using them. nOps helps you find this technical debt and see what is actually happening: spin resources up when they are needed, and nOps will tell you when they aren't being used, so you can launch something one morning and shut it down the next.

Adjust Cloud Presence and Load-balance Across Regions

With nOps you can pinpoint instances where utilization is extremely low within the specified time period. This will allow you to find candidate instances, which you can then switch to smaller sizes.

Check Cost Impact of Cloud-Server Configuration Change

With nOps GitHub Actions you can see the cost impact of any change that you make to your server configuration. You will see the cost impact in your local environment before the changes are pushed to the repository. With the help of pre-commit hooks, the nOps GitHub Action will show you a table similar to the one below:

Project | Previous | New | Diff
terraform_project1 | $167.04 | $83.38 | $83
terraform_project3 | – | $24.91 | $24.91
terraform_project4 | $83.38 | $83.38 | $0.0
terraform_project2 | $200.45 | – | $200.45
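A cost-difference table like the one above can be derived mechanically from per-project monthly estimates. The sketch below uses made-up figures matching the table and reports a signed diff (negative means the change reduces cost), which is one reasonable convention; the actual nOps CLI output format may differ.

```python
# Sketch of deriving a cost-difference table from per-project monthly
# estimates. A project missing from a dict is absent from that plan.
# Figures are illustrative.
previous = {"terraform_project1": 167.04, "terraform_project4": 83.38,
            "terraform_project2": 200.45}
new = {"terraform_project1": 83.38, "terraform_project3": 24.91,
       "terraform_project4": 83.38}

diffs = {}
for project in sorted(set(previous) | set(new)):
    old_cost = previous.get(project, 0.0)
    new_cost = new.get(project, 0.0)
    diffs[project] = round(new_cost - old_cost, 2)
    print(f"{project}: ${old_cost:.2f} -> ${new_cost:.2f} "
          f"(diff {diffs[project]:+.2f})")
```

With this convention, terraform_project1 shows a saving and terraform_project3, newly added, shows an increase.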

Build Your Own Application on Top of the nOps SDK

Using the nOps open-source SDK you can build your own custom application on top of the SDK and integrate the cost impact check in your own CI/CD pipeline.

To learn more about how you can integrate the nOps CLI into your own CI/CD pipeline, see nOps Precommit Client, which illustrates how nOps CLI was integrated into GitHub Actions.

Fully Automated Cost Retrieval with the nOps GitHub Actions

The nOps SDK is public; you can pull it down and integrate it into your own workflows. Anyone can build on top of our SDK, and the nOps CLI, which is also public and accessible to all, is the ultimate example.

Whenever you make Terraform code changes and create a pull request, the nOps CLI uses pre-commit hooks and GitHub Actions to get the estimated cost impact for the IaC projects affected by the pull request's code changes:

When you create a pull request, the GitHub Actions show the cost impact of the changeset in the form of a cost difference table:

The nOps CLI wasn't built by nOps: it was built, and is maintained, by the open-source community.

Getting Started with nOps CLI and GitHub Actions

Prerequisites

In addition to the nOps SDK Prerequisites, you also need a GitHub repository, since nOps uses a GitHub Action to provide cost differences when you create a pull request for changes to Terraform IAC projects.

The nOps CLI also requires a Terraform installation to detect Terraform changes and to build the required JSON specs for the nOps SDK to act on.

Installing the nOps CLI

You can install and execute the nOps CLI independently of the nOps SDK, according to your requirements, in three different ways: as a CLI, as a pre-commit hook, and as a GitHub Action. All you have to do is:

  • Install the nOps CLI independently
  • Install the nOps pre-commit hooks
  • Use the nOps GitHub Action

To learn more about the independent installation and execution of the nOps CLI, see nops-precommit-client GitHub public repository.

Install the nOps pre-commit Hooks

You can install the pre-commit hook using a simple pip command:

pip3 install pre-commit

Once the pre-commit hook is installed, use the nOps .pre-commit-config.yaml file in your repo to enable the nOps Pricing Hook and nOps Dependency Hook.

For more information on pre-commit see: https://pre-commit.com/. To learn more about the nOps pre-commit hooks, see nops-precommit-client GitHub public repository.

Use the nOps GitHub Action

GitHub Actions are workflows that are triggered when a specific event occurs in your repository. The nOps GitHub Action provides cost differences for changes made to Terraform IAC projects when you create a pull request. The action runs pricing checks for Terraform projects configured as part of the nOps-action.yml.

To learn how you can use the nOps GitHub action, see nOps GitHub Action.

Account Module

In addition to the Pricing module and Cloud Infrastructure module, the CLI also makes use of the nOps SDK Account module. This module exposes the Account class, which provides an entrypoint into interacting with the cloud accounts of your profile.

To learn more about the Account module, see Account Module.

Custom Automation of Cost Retrieval with the nOps CLI and SDK

The nOps CLI is open source and publicly available. It was built on top of the nOps SDK to automate retrieving the cost impact of a changeset, with the help of the pre-commit client and GitHub Action, for Terraform projects on GitHub.

You can use the open-source CLI and the nOps SDK to build your own custom automation for your preferred CI/CD pipeline.

To get a sense of how you can create your own custom automation, see the nOps Precommit Client, which shows how the community was able to achieve full automation.

To implement the full automation, the community used these SDK modules:

# To estimate the cost impact of an IaC changeset and display pricing using the nOps SDK.
from nops_sdk.pricing import CloudCost
from nops_sdk.cloud_infrastructure.enums import AWSRegion
from nops_sdk.cloud_infrastructure.cloud_operation import Periodicity

# An entrypoint into interacting with the cloud accounts of your nOps profile to get the cloud resource dependencies.
from nops_sdk.account.account import Account

# The main entry point to the nOps SDK to manage nOps accounts.
from nops_sdk.api import APIClient

To learn more about these SDK modules, their purpose, and functionality, see nOps SDK.

The community also created these CLI utilities, constants, subcommands, enum inputs, and libraries to achieve the full automation for cost retrieval:

from nops_cli.utils.logger_util import logger

# The generic alias for Terraform resource names.
from nops_cli.constants.resource_mapping import TERRAFORM_RESOURCE_MAPPING

# Interact with the nOps CLI.
from nops_cli.utils.execute_command import execute

# Get the Terraform outputs/states for the nOps pricing and dependency APIs.
from nops_cli.subcommands.dependancy.terraform_dependency import TerraformDependency
from nops_cli.subcommands.pricing.terraform_pricing import TerraformPricing

# nOps pricing dependencies.
from nops_cli.constants.input_enums import Periodicity, IacTypes

# Get the Terraform outputs/states for the nOps dependency APIs.
from nops_cli.libs.terraform import Terraform
from nops_cli.libs.get_accounts import NOpsAPIClient

To learn more about these utilities, constants, subcommands, enum inputs, and libraries, see this nOps precommit client GitHub repository.

Manual Retrieval of Cost Impact with the nOps SDK

The nOps SDK consists of several modules, but this Solution Doc — evaluating the cost impact of a Terraform changeset — focuses on the Pricing module and the Cloud Infrastructure module.

These two modules collectively form the basis of evaluating the cost impact of a changeset. Here is an example Python code snippet that shows how these two modules work together to get the cost changes:

>>> from nops_sdk.pricing import CloudCost
>>> from nops_sdk.cloud_infrastructure.enums import AWSRegion
>>> from nops_sdk.cloud_infrastructure.cloud_operation import Periodicity
>>> spec = [
        {
            "new_data": {"instance_type": "t2.micro"},
            "old_data": None,
            "operation_type": "create",
            "resource_type": "ec2",
            "ami": "ami-0269f532"
        },
        {
            "new_data": {"instance_type": "t2.nano", "ami": "ami-00bb6f60"},
            "old_data": {"instance_type": "t2.micro", "ami": "ami-0269f532"},
            "operation_type": "update",
            "resource_type": "ec2"
        },
        {
            "new_data": None,
            "old_data": {
                "instance_class": "db.t2.micro",
                "engine": "oracle-ee",
                "license_model": "bring-your-own-license",
                "multi_az": True
            },
            "operation_type": "delete",
            "resource_type": "rds",
        },
    ]
>>> cloud_cost = CloudCost(aws_region=AWSRegion('us-west-2'), spec=spec)
>>> cloud_cost.load_prices()

After you load the prices, you can compute and output prices for any supported Periodicity at no significant cost:

>>> cloud_cost.compute_cost_effects(period=Periodicity('monthly'))
>>> cloud_cost.output_report()
Create t2.micro EC2 instance with a monthly cost impact of 8.35
Delete db.t2.micro RDS instance with a monthly cost impact of -9.79
Update t2.micro EC2 instance to t2.nano EC2 instance with a monthly cost impact of -4.18

The example above is how you can access the nOps SDK programmatically.

Pricing Module

The nOps Pricing module provides estimated costs for changes to Terraform Infrastructure as Code (IaC) projects proposed through GitHub pull requests.

The Pricing module exposes the nops_sdk.pricing.CloudCost class, which is then used to estimate the cost impact of an IaC changeset.

To learn more about the Pricing module, see Pricing Module.

Cloud Infrastructure

This module provides Enum and Cloud Operation classes which form the backbone of nOps SDK’s cloud pricing and dependency functionality.

To learn more about the Cloud Infrastructure module, see Cloud Infrastructure Module.

nOps SDK Further Capabilities

The AWS product families that the nOps Pricing module supports are:

  • EC2 = "ec2"
  • RDS = "rds"
  • EKS = "aws_eks_cluster"
  • EKS_NODE_GROUP = "aws_eks_node_group"

EC2

This snippet is an example of the EC2 specs in Python for calling the nOps SDK:

{
    "new_data": {"instance_type": "t2.micro"},
    "old_data": None,
    "operation_type": "create",
    "resource_type": "ec2",
    "ami": "ami-0269f532"
}

RDS

This snippet is an example of the RDS specs in Python for calling the nOps SDK:

{
    "new_data": None,
    "old_data": {
        "instance_class": "db.t2.micro",
        "engine": "oracle-ee",
        "license_model": "bring-your-own-license",
        "multi_az": True
    },
    "operation_type": "delete",
    "resource_type": "rds",
}

EKS Cluster

This snippet is an example of the EKS Cluster specs in Python for calling the nOps SDK:

{
    'id': None,
    'resource_type': 'aws_eks_cluster',
    'operation_type': 'create',
    'old_data': None,
    'new_data': {
        'name': 'devopsthehardway-cluster',
    }
}

EKS Node Group

This snippet is an example of the EKS Node Group specs in Python for calling the nOps SDK:

{
    'id': None,
    'resource_type': 'aws_eks_node_group',
    'operation_type': 'create',
    'old_data': None,
    'new_data': {
        'cluster_name': 'devopsthehardway-cluster',
        'instance_types': ['t3.xlarge'],
        'node_group_name': 'devopsthehardway-workernodes',
        'scaling_config': [
            {
                'desired_size': 1,
                'max_size': 1,
                'min_size': 1
            }
        ],
    }
}

To access nOps programmatically via the nOps SDK, all you need to do is install the nOps SDK.

To learn more about the nOps SDK and how you can use it, see nOps SDK and nOps SDK Documentation.

Getting Started with nOps SDK

Prerequisites

Before you can use the nOps SDK for evaluating the cost impact of a changeset, you must configure your AWS cloud accounts to allow nOps to pull metadata from the accounts during your setup.

You can do this by using one of the following methods:

  • CloudFormation (Adding an AWS account to nOps with Automatic Setup)
  • Manually setting up IAM Roles, Policies, and S3 Buckets (Adding an AWS account to nOps with Manual Setup)

For more information about cloud account configuration, see Getting Started.

The nOps SDK’s Pricing module displays the projected costs of resource changes that you plan to implement through a Terraform project; in other words, it predicts the cost impact of Terraform changes.
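To illustrate where the spec format used throughout this doc could come from, here is a hedged sketch of translating one entry of `terraform show -json plan` output into a spec dictionary. The helper function and the type mapping are hypothetical, not part of the nOps SDK or CLI; the spec layout follows the EC2/RDS examples in this doc:

```python
# Hypothetical helper (not part of the nOps CLI): translate one entry of
# `terraform show -json plan` ("resource_changes") into the spec layout
# consumed by nops_sdk.pricing.CloudCost in this doc's examples.

ACTION_TO_OPERATION = {
    ("create",): "create",
    ("update",): "update",
    ("delete",): "delete",
}

# Assumed mapping from Terraform resource types to nOps resource types.
TYPE_MAP = {"aws_instance": "ec2", "aws_db_instance": "rds"}

def plan_change_to_spec(change: dict) -> dict:
    actions = tuple(change["change"]["actions"])
    return {
        "new_data": change["change"]["after"],
        "old_data": change["change"]["before"],
        "operation_type": ACTION_TO_OPERATION[actions],
        "resource_type": TYPE_MAP.get(change["type"], change["type"]),
    }

example_change = {
    "type": "aws_instance",
    "change": {
        "actions": ["update"],
        "before": {"instance_type": "t2.micro"},
        "after": {"instance_type": "t2.nano"},
    },
}

spec = plan_change_to_spec(example_change)
print(spec["operation_type"], spec["resource_type"])
```

A list of such dictionaries is what the `CloudCost` examples in this doc accept as `spec`.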

In this section of the solution doc, we will walk you through how you can install and begin using the nOps SDK today. All you have to do is:

  • Install the nOps SDK
  • Configure the nOps API key

Refer to the nOps SDK Documentation to learn about this process in depth.

Requirements

Before you continue, make sure you have:

  • Python 3.9 or newer
  • requests
  • boto3
  • nOps API Key

To learn more about how to get the nOps API key, see Create an API key.

Installing the nOps SDK

The nOps SDK and nOps CLI are available in the form of a Python library that you can install from TestPyPI with a simple pip command:

pip install --index-url https://test.pypi.org/simple/ nops-sdk --extra-index-url https://pypi.org/simple

Follow the instructions in the nOps SDK documentation to learn about the installation procedure.


Note: Currently, the nOps SDK only displays costs for resource changes you make, or plan to make, to EC2, EKS, and RDS resource types in AWS cloud accounts.


NAT Gateway Visibility

The NAT Gateway, as necessary as it is to a cloud infrastructure, is also a major point of concern when it comes to cost. One small misconfiguration in a routing table can cost thousands of dollars every single day for traffic that could have been virtually free. It is also a cause for concern when you are unable to determine who or what is causing a spike in traffic and cost.

nOps NAT Gateway Visibility solves this “Why is this so expensive?” problem that customers face. It allows you to determine the source and destination of NAT traffic. It also provides a breakdown of the flow direction, cost, and data (in gigabytes) of each network interface in a given time period:

Value of NAT Gateway Visibility

At its core, NAT Gateway Visibility allows nOps customers to identify where, and from whom, an increase in cost is coming. A few major value points of this feature are given below.

Pinpoint the Cause of a Cost Increase

Identify the source address, destination address, and the exact network interface ID that is causing an unexpected increase in cost.

Locate Misconfigurations in Routing Tables

Staying within the Amazon network is free; for example, traffic going from a private subnet to an S3 Gateway endpoint is free. But with a bad configuration, all of this traffic might instead go through the NAT Gateway, causing an unprecedented increase in costs.

Network traffic in a VPC that goes through the NAT Gateway to reach another instance inside the same VPC is a bad configuration that incurs 100% unnecessary cost. Higher costs can also result from subnets talking to internal subnets through the NAT Gateway when there is no need to, again costing 100% more than it should. nOps NAT Gateway Visibility offers you an easy method to locate all such misconfigurations.

Find the Top Perpetrators of Cost Increase

Use the NAT Gateway Visibility feature to quickly find the top perpetrators of a cost increase. You have the option to filter the perpetrators by cost (top 50, top 75, and top 100) within a time period (1 week, 2 weeks, MTD, 3 months, and 6 months).

Substitute NAT Gateway with VPC Endpoints

With the help of the source address, destination address, source AWS service, and destination AWS service against each network interface ID, NAT Gateway Visibility allows you to pinpoint the traffic that can be rerouted via VPC endpoints instead of NAT Gateway.

Learn about your Traffic Trends

You can learn about your traffic trends with the help of the Resource Spend History chart, which breaks down spending by timestamp.

Traffic Flow Direction and Resulting Cost

An increase in cost is not always the result of a bad configuration; sometimes you just need to know which IPs carry the most traffic and therefore drive up overall cost.

For such cases, the nOps NAT Gateway Visibility feature also shows you which IPs are communicating, whom they are communicating with, and what this communication costs. It also shows you the flow direction: ingress or egress.

Navigate the NAT Gateway Visibility in nOps

The NAT Gateway Visibility feature utilizes the VPC Flow Log records published to an S3 bucket in Parquet format; see the Getting Started section below for more details.

In order to get the insights from NAT Gateway Visibility:

  1. From the nOps Dashboard, go to Cost > Cloud Resources Cost.
  2. In the Cloud Resources Cost panel, go to the Resources tab.
  3. From the resource list, click on a NAT Gateway resource to open the Resource Details panel.
  4. In the Resource Details panel, switch to the Cost History tab to see the details of NAT Gateway Visibility.

Resource Spend History

You can learn about your traffic trends with the help of the Resource Spend History chart, which breaks down spending by timestamp.

You can also filter the Resource Spend History based on Usage Types, Operations, Top X, and a timeframe:

You can also uncheck the resources for which you do not wish to see the history:

Network Interface Flow Logs

The logs available in this section are updated every 10 minutes and are aggregated by day.

The Network Interface ID, Flow Direction, Source Address, and Destination Address fields allow you to trace the path of the network traffic.

The source IPs belong to the NAT Gateways, and the destination IPs belong to locations on the internet.
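The kind of aggregation these fields enable can be sketched in a few lines of Python. This is an illustration, not nOps code; the sample records and the $0.045/GB data-processing rate are assumptions (check the NAT Gateway pricing for your region):

```python
# Sketch: aggregate flow log records by (source, destination, direction)
# to see which conversations move the most data through the NAT Gateway.
from collections import defaultdict

NAT_PROCESSING_USD_PER_GB = 0.045  # assumption; varies by region

# Made-up records using the flow log fields listed above.
records = [
    {"srcaddr": "10.0.1.5", "pkt-dstaddr": "52.1.2.3", "flow-direction": "egress", "bytes": 5 * 1024**3},
    {"srcaddr": "10.0.1.5", "pkt-dstaddr": "52.1.2.3", "flow-direction": "egress", "bytes": 3 * 1024**3},
    {"srcaddr": "10.0.2.9", "pkt-dstaddr": "10.0.1.5", "flow-direction": "ingress", "bytes": 1 * 1024**3},
]

totals = defaultdict(int)
for r in records:
    totals[(r["srcaddr"], r["pkt-dstaddr"], r["flow-direction"])] += r["bytes"]

# Largest conversations first, with the estimated processing cost.
for (src, dst, direction), nbytes in sorted(totals.items(), key=lambda kv: -kv[1]):
    gb = nbytes / 1024**3
    print(f"{src} -> {dst} ({direction}): {gb:.1f} GB, ~${gb * NAT_PROCESSING_USD_PER_GB:.2f}")
```

A conversation between two private addresses showing up here (like the ingress row above) is exactly the kind of intra-VPC misconfiguration described earlier.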

Getting Started with NAT Gateway Visibility

To get started with the NAT Gateway Visibility feature you need to:

  • Create a flow log that publishes to Amazon S3.
  • Configure the flow log to provide nOps the required fields using the Parquet log format.

Note: To learn more about how you can log IP traffic using flow logs, see Flow Logs.

nOps Required Flow Log Fields

nOps requires the following flow log configuration:

  • traffic_type: “ACCEPTED”
  • log_format:
    • bytes
    • dstaddr
    • end
    • flow-direction
    • pkt-dst-aws-service
    • pkt-dstaddr
    • pkt-src-aws-service
    • pkt-srcaddr
    • srcaddr

module "s3_nops_prod_vpc_flow_logs_bucket" {
  source = "./logging-s3-bucket"

  bucket_prefix = "${local.config.identifier}-"
  tags          = local.config.tags
  aws_org_id    = local.config.aws_org_id
}

resource "aws_flow_log" "default" {
  log_destination      = module.s3_nops_prod_vpc_flow_logs_bucket.bucket_arn
  log_destination_type = "s3"
  traffic_type         = "ACCEPTED"
  vpc_id               = data.terraform_remote_state.vpc.outputs.vpc_id
  log_format           = "$${bytes} $${dstaddr} $${end} $${flow-direction} $${pkt-dst-aws-service} $${pkt-dstaddr} $${pkt-src-aws-service} $${pkt-srcaddr} $${srcaddr}"
  tags = merge(
    local.config.tags,
    {
      Name = local.config.identifier
    },
  )
}

Troubleshooting Tips

  • If you don’t see any data in the NAT Gateway Visibility feature, check whether there are any spaces in the name of the flow log file. Make sure that the flow log files you create and publish to S3 have no spaces in their names.
  • If you still don’t see any data in the feature, give it a little time to update. It takes a little while for the NAT Gateway Visibility data to update the first time a flow log file is created.

Utilize nOps Resource Scheduler with EventBridge Integration to Reduce Costs Automatically

The nOps EventBridge integration allows our customers to act on optimization recommendations provided by our AI-driven platform. Use our simple-to-deploy Lambdas to automatically remediate or schedule remediation for common cloud waste issues, customize events to build your own workflow, or use our scheduler app to automatically enable and disable groups of EC2 and RDS instances.

EventBridge is an easy way to integrate with AWS infrastructure, and it also supports multiple partner event sources.

In nOps, EventBridge integration is closely tied to the nOps Resource Scheduler. There are many layers to the Resource Scheduler, and one of those layers is the EventBridge integration and configuration.

Use Cases

Automate any nOps Optimization Recommendation:

  • Clean up cloud waste
  • Schedule, stop, and start resources
  • Terminate instances
  • Rightsize instances
  • Configure Reserved Instances

Types of Events that nOps EventBridge Publishes

The following is a catalog of the detail types that the nOps EventBridge integration publishes:

  • Infrequently accessed S3 bucket(s)
  • Underutilized resources
  • Infrequently accessed EFS
  • ECS cluster with underutilized resources
  • S3 rightsizing recommendation
  • EC2 instance rightsizing recommendation
  • RDS instance rightsizing recommendation
  • DynamoDB low utilization
  • CloudWatch log groups underutilized resources
  • IOPS performance check
  • DynamoDB throughput check
  • Unused AWS resource details
  • EC2 slow network traffic details
  • RDS instance idle
  • Unattached workspace directories

Sample Event

Event payloads can be customized, but the default event format for our MVP rule set follows this pattern:

{
    "version": "0",
    "id": "d2949071-573b-c67a-93a5-530604aec254",
    "detail-type": "nOps notification for 'rule'",
    "source": "aws.partner/nOps.io.test/012345678912/nops_uat_notification_12345_AWSDEMOUSEAST",
    "account": "012345678912",
    "time": "2022-09-21T09:21:23Z",
    "region": "us-east-1",
    "resources": [],
    "detail": {
        "account_number": "012345678912",
        "description": "sg-abcd1234",
        "details": {
            "_id": "78ee385ce11c818238e57663240372eb",
            "compliant": "False",
            "name": "default",
            "timestamp": 1645920000,
            "violation_date": "2022-02-27",
            "violation_type": "unrestricted_ssh"
        },
        "item_hash": "item_hash",
        "name": "unrestricted_ssh",
        "region": "us-east-1",
        "time": "2022-02-27"
    }
}
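To show how a consumer of these events might use this payload, here is a minimal sketch of a handler that extracts the rule name and affected resource from the `detail` object. The function name and the summary format are hypothetical, not part of nOps:

```python
# Hypothetical Lambda-style handler for the event shape shown above.
def handle_nops_event(event: dict) -> str:
    detail = event["detail"]
    rule = detail["name"]             # e.g. "unrestricted_ssh"
    account = detail["account_number"]
    region = detail["region"]
    resource = detail["description"]  # e.g. "sg-abcd1234"
    return f"[{account}/{region}] {rule}: {resource}"

# Trimmed-down copy of the sample event above.
sample = {
    "detail-type": "nOps notification for 'rule'",
    "detail": {
        "account_number": "012345678912",
        "description": "sg-abcd1234",
        "name": "unrestricted_ssh",
        "region": "us-east-1",
    },
}
print(handle_nops_event(sample))  # [012345678912/us-east-1] unrestricted_ssh: sg-abcd1234
```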

EventBridge Integration

EventBridge Integration makes it easier for nOps to automate workflows in the client’s environment.

Amazon EventBridge is a serverless event bus that makes it easier to build event-driven applications at scale using events generated from your applications, integrated Software-as-a-Service (SaaS) applications, and AWS services.

With EventBridge integration, nOps will be able to:

  • Automate events based on nOps rules.
  • Trigger an automation to reduce the size of underutilized EC2 instances.
  • Purchase or exchange RI automatically risk-free, if RI utilization is not optimized. To see the nOps list of risk-free commitments, go to ShareSave Dashboard > List of Opportunities > List of Risk-Free Commitments.
  • Turn on and off the EC2 and RDS instances in a group with the Resource Scheduler.
  • Trigger messages in multiple services other than the event messages.

To add nOps as an EventBridge partner, go to AWS > Amazon EventBridge > Partner event sources, and select nOps as one of the options to listen to. Click Setup and configure the options:

Once the EventBridge integration is set up, you can create event sources directly from the nOps application and deploy them to any account and region that you’ve connected to nOps.

To integrate your AWS EventBridge with nOps, log in to nOps and click User Avatar > Organization Settings > Integrations > EventBridge. In the EventBridge tab, click + Create EventBridge:

In the Create New Event Bridge page:

  1. Create a name for this EventBridge.
  2. Select the AWS account you want to deploy the EventBridge into. In the AWS accounts list, you will only see the accounts that you onboard into nOps.
  3. Select the region you want to deploy the EventBridge into.
  4. Click Create.

When you click Create, nOps will deploy the EventBridge into your selected AWS account and region by creating an event bus in the selected AWS account.

To see the event bus that nOps just created, go to AWS > Amazon EventBridge > Event buses > Custom event bus.

The next step is to add an EventBridge target into Webhooks.

Adding EventBridge Target into nOps Webhooks

nOps has a Webhook for almost every cost optimization rule in the nOps environment.

When you create a Webhook, anytime an associated event is fired a message will be sent to either the endpoint you define or the EventBridge that you create.

To add an EventBridge target into nOps Webhooks, go to User Avatar > Organization Settings > Integrations > Outgoing Webhooks. In the Outgoing Webhooks tab, click + Create New Webhook:

In the Create New Webhook page:

  1. Create a name for the Webhook.
  2. Select an Event. Notice that the Event option has two fields: in the first, select the event category; in the second, select the actual event.
  3. Select an Endpoint; it can be either a simple endpoint (Target URL) or an EventBridge. Within the scope of this documentation, we will only select the EventBridge option, which sends the Webhook to your EventBridge.
  4. Select an EventBridge from the list. The list consists of all EventBridges that you create in nOps > Organization Settings > Integrations > EventBridge tab.
  5. Click Save.

EventBridge Automatic Configuration Walkthrough

When you create an EventBridge in nOps, nOps will automatically create an event bus in your selected AWS account. To see the event bus that nOps automatically created, go to AWS > Amazon EventBridge > Event buses > Custom event bus.

Once you have configured the webhook, return to the nOps > Organization Settings > Integrations > EventBridge tab and click Launch Stack against the EventBridge that you created:


Note: Before you click Launch Stack, make sure that you are logged into the AWS account that you used for the EventBridge. You also need to make sure that the account region is the same as the one you specified while creating the EventBridge.


When you click Launch Stack, you will be redirected to AWS > CloudFormation > Stacks > Create stack:

You can follow the link highlighted above to see the CloudFormation template. To see the CloudFormation template right now, see Scheduler CloudFormation Template.

Check the acknowledgement box, and click Create stack. The EventBridge is now ready and configured.

EventBridge Manual Configuration Walkthrough (Not Recommended)

This is an example of manual EventBridge configuration with AWS SQS.

Create a Target Queue in AWS – SQS

First, we will create a target queue in AWS – SQS. To create a queue, go to AWS > Amazon SQS > Queues > Create queue. In the Create queue page, enter a suitable name for the queue in the Name field and leave all other fields as is. Click Create queue.

Next we will create an EventBridge.

Configure the Event Bridge in your AWS Account

When you create an EventBridge in nOps, nOps will automatically create an event bus in your selected AWS account. To see the event bus that nOps automatically created, go to AWS > Amazon EventBridge > Event buses > Custom event bus.

Now, you need to create an EventBridge rule in your AWS account. To create an EventBridge rule, go to AWS > Amazon EventBridge > Rules and click Create Rule.

In the Event bus option, select the event bus that nOps created for you. The name of the event bus will be the same as the EventBridge that you created in nOps.

When you click Next, the next step is Build Event Pattern. In the Build Event Pattern section, select Custom patterns (JSON editor). In this example, we will use the following pattern:

{
    "detail-type": [
        {
            "anything-but": "initializing"
        }
    ]
}
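The `anything-but` operator makes this rule match every event whose `detail-type` is anything other than "initializing". A simplified sketch of that matching semantics, for illustration only (real EventBridge pattern matching supports many more operators and nested fields):

```python
# Simplified matcher covering plain values and the "anything-but" operator.
def matches(pattern: dict, event: dict) -> bool:
    for field, conditions in pattern.items():
        value = event.get(field)
        field_ok = False
        for cond in conditions:  # conditions in a list are OR-ed together
            if isinstance(cond, dict) and "anything-but" in cond:
                if value != cond["anything-but"]:
                    field_ok = True
            elif value == cond:  # plain value match
                field_ok = True
        if not field_ok:  # every field named in the pattern must match
            return False
    return True

pattern = {"detail-type": [{"anything-but": "initializing"}]}
print(matches(pattern, {"detail-type": "nOps notification for 'rule'"}))  # True
print(matches(pattern, {"detail-type": "initializing"}))                  # False
```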

For the full list of nOps Event Patterns, see nOps EventBridge Event Patterns.

Enter the Event pattern and click Next:

The next step is Select target(s).

In the target list, there are many targets you can select; in the future, we will also provide numerous Lambda functions. In the scope of this example, select SQS queue, choose the queue you created at the beginning of the walkthrough, and then click Next:

The next section is the Configure tags section, which is optional. In the scope of this example, you don’t need to add any tags, simply click Next:

The next section is the final section, Review and create. In this section review the rule and click Create Rule.

Send Demo Message

To check whether the EventBridge was configured correctly, we will send a test message to Amazon SQS via the Webhook that we created. To send the test message, go to nOps > Organization Settings > Integrations > Outgoing Webhooks and click the icon highlighted below against the Webhook:

Now, to check if the message is received, go to your AWS > Amazon SQS > Queues > [The queue you created]. In the queue, click the Send and receive messages button:

On the Send and receive messages page, click the Poll for messages button:

If the configuration was successful, you will receive the test message in the Messages section.

Resource Scheduler Recommendations

nOps identifies opportunities to turn compute resources, instances, and databases off at certain hours. Expensive environments still incur costs when they are not being used, unless they are turned off.

nOps looks at your usage statistics and makes recommendations to schedule resources for availability during a specific time frame when they are expected to be used. You can then schedule these resources to start and stop using the Resource scheduler.

To see the Resource Scheduler Recommendations and the potential savings that scheduling resources can provide, go to nOps > ShareSave Dashboard > List of Opportunities > Resource Scheduler:

Click on an opportunity to expand it. nOps will provide you with a schedule recommendation for each instance type and an estimate of the potential savings:

You can either create a schedule directly from the Resource Scheduler section by clicking the Schedule Now button against each Resource Scheduler recommendation, or you can use the Scheduler dashboard. To learn how to create schedules using the Scheduler dashboard, see the next section.

Scheduler Dashboard

The nOps Resource Scheduler is driven by EventBridge; when you create a schedule, it is EventBridge that makes it possible to turn resources and databases off when they are not being used.

To create a schedule go to nOps > Scheduler dashboard and click + Create New Schedule:

When creating a schedule you can define a group of resources and set an EventBridge as a target for this schedule:

Let’s say resources are only being used Monday to Friday, 8 am to 8 pm, and not at other times. You can schedule them to start and stop around the times when they are actually used to save costs.

nOps will continue to update the Resource Scheduler recommendations for machines that could be shut down. Based on these recommendations, you can define a new schedule or add the recommended resources to an existing schedule group.
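The Monday-to-Friday, 8 am to 8 pm example boils down to a simple uptime-window check, which the Resource Scheduler applies via EventBridge rules rather than client-side code. A sketch of that decision, purely for illustration:

```python
# Illustrative uptime-window check for a "weekdays, 8am-8pm" schedule.
from datetime import datetime

def should_be_running(now: datetime, days=range(0, 5), start_hour=8, end_hour=20) -> bool:
    """Return True when `now` falls inside the scheduled uptime window.

    `days` uses Python's weekday numbering: 0 = Monday ... 6 = Sunday.
    """
    return now.weekday() in days and start_hour <= now.hour < end_hour

print(should_be_running(datetime(2024, 3, 4, 9, 30)))   # Monday 9:30am -> True
print(should_be_running(datetime(2024, 3, 9, 9, 30)))   # Saturday -> False
print(should_be_running(datetime(2024, 3, 4, 21, 0)))   # Monday 9pm -> False
```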

nOps Cost Optimization Recommendations

nOps provides several forms of cost optimization recommendations ranging from unused, underutilized, and infrequently accessed resources to resource rightsizing recommendations.

In the nOps platform, you can find cost optimization recommendations on the:

  • Rules > Cost tab:
    • Unused
    • Underutilized
    • Infrequently Accessed
    • Rightsizing
    • Miscellaneous
  • Cost > Resource Rightsizing page:
    • EC2 Rightsizing
    • RDS Rightsizing
    • S3 Rightsizing

nOps Rules Cost Recommendation

The Rules > Cost tab provides cost optimization recommendations along with details of the appropriate steps you can take.

Following is a list of cost optimization recommendations that nOps provides on the Cost page. The list is not exhaustive; nOps is constantly adding more recommendations that will help you reduce costs:

  • Unused:
    • Unused AWS EBS Volumes
    • Unused AWS Elastic IP (EIP) Resources
    • Unused AWS NAT Resources
    • Unused Azure Disk Storage
    • Unused Azure Network Interfaces
    • Unused Azure Public IP Addresses
    • Unused Azure Virtual Machine
    • Unused Azure Virtual Network NAT
  • Underutilized:
    • Underutilized AWS CloudWatch Log Group
    • Underutilized AWS EBS Provisioned IOPS
    • Underutilized AWS ELB Resources
    • Underutilized AWS RDS Provisioned IOPS
    • Underutilized (% read/write) CosmosDB Containers
    • Underutilized (% read/write) DynamoDB Tables
    • Underutilized (%) ECS Cluster
    • Underutilized (% capacity) EC2 Instances
    • Underutilized Azure Virtual Machine
  • Infrequently Accessed:
    • Infrequently Accessed AWS S3 Bucket
    • Infrequently Accessed Azure Storage Account Buckets
    • Infrequently Accessed Azure Storage Account Resources
  • Rightsizing:
    • Review EC2 Instances (low traffic)
    • Review EC2 Instance Size
    • Review S3 Storage Class
    • Review MySQL Instance Size
    • Review Azure MariaDB Instance Size
    • Review Azure PostgreSQL Instance Size
  • Misc:
    • Disabled Autoscaling CosmosDB Containers or Databases
    • Disabled Autoscaling DynamoDB Tables
    • Unattached Workspace Directory

Resource Rightsizing Recommendations

Rightsizing is one of the best methods to bring cloud costs under control. nOps provides resource rightsizing recommendations for these AWS services:

  • EC2
  • RDS
  • S3

Each service has its own dedicated tab with recommendations:

To reduce costs by rightsizing, nOps continuously analyzes instance performance, usage patterns, and needs. nOps then provides cost optimization recommendations to turn off idle instances and to rightsize instances that are either poorly matched to their workloads or over-provisioned.

On the Resource Rightsizing page, against each resource, nOps provides:

  • AWS Account
  • Region
  • Resource Name/ID
  • Current Configuration
  • Suggested Configuration
  • Unused CPU(%)
  • Current Monthly Cost
  • New Monthly Cost (If the Suggested Configuration is implemented)
  • Monthly Savings

Rightsizing is an ongoing process since resource needs are constantly changing. nOps simplifies both resource analysis and monitoring, which enables you to make rightsizing a regular part of your cloud management process.

nOps analyzes your resources and provides cost optimization recommendations based on the state and usage of those resources. nOps categorizes the resources and recommendations based on these states:

  • Steady State: In the steady state, the load remains at a constant level for some time. It is even possible to forecast the compute load at any one time. For this type of usage pattern, consider Reserved Instances. They can yield significant savings.
  • Variable and predictable: For such instances, the load varies over time but on a predictable schedule. AWS Auto Scaling is ideal for applications that exhibit stable demand patterns, weekly, daily or hourly usage variability. You can use AWS Auto Scaling to scale EC2 capacity whenever there is a spike or fluctuation in traffic.
  • Dev/test/production: Turn off production, testing, and development environments in the evening since organizations usually use them during business hours.
  • Temporary: Do you have temporary workloads with flexible starting times that you can interrupt? Avoid using an on-demand instance. Instead, place a bid on an Amazon EC2 Spot Instance.

The nOps Process

The recommendations of nOps are based on a rigorous process that is often unique for each form of recommendation.

Following are examples of how nOps comes to the conclusion that rightsizing is required and offers you the recommendation to act upon.

AWS EKS

Let’s say you have an AWS EKS instance that is an m6i.xlarge. nOps looks at no fewer than 23 different metrics and concludes the right size of the instance according to your current needs:

Instance Type: m6i.xlarge
max_cpu_usage_max: 1
max_ram_usage_bytes: 3220504279
max_cpu_cores_limit: 0
max_ram_bytes_limit: 3221225472
min_cpu_cores_limit: 0
min_ram_bytes_limit: 0
total_instance_count: 12993
total_pod_count: 12993
min_cpu_cores_allocated_by_nodegroup: 0
max_cpu_cores_allocated_by_nodegroup: 0
min_ram_bytes_allocated_by_nodegroup: 0
max_ram_bytes_allocated_by_nodegroup: 3221225472
Memory: 17179869184
vCPU: 4
mem_usage_diff_bytes: 13959364905
mem_usage_diff_percent: 0.8125419789575972
cpu_usage_diff: 3
cpu_usage_diff_percent: 0.75
mem_limit_diff_bytes: 13958643712
mem_limit_diff_percent: 0.8125
cpu_limit_diff: 4
cpu_limit_diff_percent: 1.0
Recommendation: Please update the instance type to m5d.2xlarge

In this case, nOps recommends that you update your instance from m6i.xlarge to m5d.2xlarge.
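The “diff” fields in the table measure unused headroom. Working back from the sample values, they appear to be instance capacity minus the observed peak — an inference from this table, not a published nOps formula:

```python
# Values from the sample EKS table above.
memory_bytes = 17179869184          # instance Memory
vcpu = 4                            # instance vCPU
max_ram_usage_bytes = 3220504279    # observed peak RAM usage
max_cpu_usage = 1                   # observed peak CPU cores in use

# Headroom = capacity minus observed peak (inferred formula).
mem_usage_diff = memory_bytes - max_ram_usage_bytes
cpu_usage_diff = vcpu - max_cpu_usage

print(mem_usage_diff)                            # 13959364905
print(round(mem_usage_diff / memory_bytes, 6))   # 0.812542
print(cpu_usage_diff, cpu_usage_diff / vcpu)     # 3 0.75
```

Roughly 81% of memory and 75% of CPU going unused is what drives the recommendation to change instance type.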

AWS CloudWatch

If you have CloudWatch enabled, nOps will collect six key metrics for every CloudWatch-enabled EC2 instance in your environment:

  • NetworkIn
  • NetworkOut
  • DiskReadOps
  • DiskWriteOps
  • CPUUtilization
  • mem_used_percent

For each instance in your environment, nOps will make the following calculations:

  • Network average
  • Harmonic mean of disk read and write
  • Disk read and write averages
  • Average network in / out utilization to six points of precision
  • Average memory utilization
  • Average CPU utilization
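
The calculations above can be sketched with made-up sample data (the precise windows and weights nOps applies are not documented here):

```python
def harmonic_mean(values):
    """Harmonic mean, dominated by the smallest values (0 if any value is 0)."""
    if any(v == 0 for v in values):
        return 0.0
    return len(values) / sum(1 / v for v in values)

# Illustrative samples, not real CloudWatch metrics.
disk_read_ops  = [120.0, 80.0]
disk_write_ops = [60.0, 100.0]

# Harmonic mean of disk read and write.
print(round(harmonic_mean(disk_read_ops + disk_write_ops), 6))  # 84.210526

# Average network utilization to six points of precision.
network_in = [0.1234567, 0.2345678]
print(round(sum(network_in) / len(network_in), 6))  # 0.179012
```

The harmonic mean is pulled toward the smallest samples, which makes it a conservative summary for bursty disk activity.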

nOps continuously monitors a 30-day sample of your utilization data and matches your CPU requirements to the latest offerings in the AWS pricing catalog in order to select the best match for your resource requirements.

If you don’t have CloudWatch enabled, nOps will still recommend the latest offering upgrades for your given instance class, when they are available.

Installations#

Install Amazon CloudWatch on EC2 Instances#

How to install CloudWatch to an EC2 Instance to use the nOps Resource Rightsizing feature

This topic describes how to install Amazon CloudWatch for your AWS EC2 instances and for EC2 instances that are EKS worker nodes, and then how to view the memory metrics through the nOps application.

  1. Install Amazon CloudWatch for EC2 Instances
  2. CloudWatch for EKS Managed EC2 Nodes (EKS Worker Nodes)
  3. How to view Memory and Usage Metrics

Install Amazon CloudWatch on EC2 Instances

To allow the Resource Rightsizing tool to collect memory metrics for your instances, install the Amazon CloudWatch agent.

nOps checks the average memory utilization for an Amazon EC2 instance over a two-week period and recommends an instance size that has at least the average memory utilization available. For example: If the current instance type has 8GB of memory available, and the average memory utilization is 700MB over a two-week period, the rightsizing recommendation will suggest an instance type that has 1GB of available memory.
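The selection rule in that example can be sketched as picking the smallest type whose memory still covers the observed average utilization. The mini-catalog below is hypothetical; real recommendations weigh many more metrics:

```python
# Hypothetical mini-catalog mapping instance types to memory in MB.
catalog = {"nano": 512, "micro": 1024, "small": 2048, "large": 8192}

def rightsize(avg_memory_mb):
    """Smallest type with at least the average utilization available."""
    fits = {t: m for t, m in catalog.items() if m >= avg_memory_mb}
    return min(fits, key=fits.get) if fits else None

print(rightsize(700))  # "micro" (1GB), matching the 700MB example above
```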

How to install Amazon CloudWatch:

  1. Log in to the instance using SSH.
  2. Run the following commands at the console to download and install the Amazon CloudWatch agent:
     wget https://s3.amazonaws.com/amazoncloudwatch-agent/debian/amd64/latest/amazon-cloudwatch-agent.deb
     sudo dpkg -i -E ./amazon-cloudwatch-agent.deb
  3. Download and install the collectd daemon:
     sudo apt-get update && sudo apt-get install collectd
  4. Create the Amazon CloudWatch configuration file by running the Amazon CloudWatch configuration wizard:
     sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
  5. Log in to the AWS IAM console and select the “Roles” menu item. Click the “Create role” button.
  6. On the “Select type of trusted entity” page, select “EC2” as the service to be associated with the new role. Click the “Next: Permissions” button to proceed.
  7. On the “Attach permissions policies” page, select the “CloudWatchAgentServerPolicy”. Click “Next: Tags” to proceed.
  8. On the “Add tags” page, add tags if required (optional). Click “Next: Review” to proceed.
  9. On the “Review” page, enter a name for the new role. Click “Create role” to create the new role.
  10. Once the role is created, click your username in the top right corner of the navigation bar and select “My Security Credentials” from the drop-down menu.
  11. On the “My security credentials” page, click the “Create access key” button.
  12. Note the new AWS access key ID and corresponding secret access key. You may want to save these to a file.
  13. Create an AWS credentials file with the AWS access key ID and secret access key at /home/bitnami/.aws/credentials with the following content. Replace the AWS-ACCESS-KEY-ID and AWS-SECRET-ACCESS-KEY placeholders with the keys obtained in the previous step:
     [default]
     aws_access_key_id=AWS-ACCESS-KEY-ID
     aws_secret_access_key=AWS-SECRET-ACCESS-KEY
  14. Edit the common configuration file for the Amazon CloudWatch agent and specify the path to the credentials file created in the previous step:
     sudo vi /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml
     Update the following content:
     [credentials]
     shared_credential_file = "/home/bitnami/.aws/credentials"
  15. Start the Amazon CloudWatch agent with the following command:
     sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
  16. Check that the agent is running with the following command:
     sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a status

The steps described above will also configure the Amazon CloudWatch agent to automatically start on server reboot.

Learn more through the AWS Help Article: Installing the CloudWatch Agent.

CloudWatch for EKS Managed EC2 Nodes (EKS Worker Nodes)

You can skip this section if:

  1. You don’t have any EKS worker nodes.
  2. The EKS worker nodes are already configured correctly with CloudWatch.

If you do have EKS managed EC2 nodes (EKS worker nodes), then you need to install the CloudWatch agent on them using preBootstrapCommands in order to read memory and disk data; otherwise, the only information available to nOps will be CPU usage.

If the CloudWatch agent is not installed for EKS worker nodes, or is configured incorrectly, CloudWatch will not report any metrics for such worker nodes. If this happens, the nOps Rightsizing Recommendations will not be able to take into account the metrics that are only available through CloudWatch.

There is a different set of instructions for configuring CloudWatch for EKS managed EC2 nodes (EKS worker nodes) compared to vanilla EC2 instances.

To learn how to configure/install CloudWatch agent for EKS worker nodes, see Install the CloudWatch agent on Amazon EKS worker nodes using preBootstrapCommands.

Follow the instructions in the AWS documentation linked above; it will enable CloudWatch to log the data correctly. Only then will nOps be able to provide accurate recommendations.

How to view Memory and Usage Metrics

Once CloudWatch is installed and the nOps app begins to receive the data you will be able to view resource details including memory and usage metrics.

To view metrics and usage

  1. Log into the nOps application.
  2. From a User Dashboard click on the Reports menu and select Cloud Inventory from the drop-down.
  3. On the Cloud Inventory page, filter the results from the left pane by selecting AWS and search for EC2 instances using the Filters options.
  4. Click on an instance to see details.
  5. The Resource Details page contains three tabs: Resource Details, Cost History, and Config History.
  6. The Resource Details tab displays an EC2 Usage Graph that shows usage over 1 week, 2 weeks and 3 months.
  7. You can change the CPU Utilization drop-down to see other options including Memory Used.
  8. To see information about this resource on AWS, click the View Resource on AWS Console button. You will be required to log into the AWS console to do this.

Install nOps K8s Agent#

This agent will run in the K8S environment to collect metrics and send them to nOps.

nOps K8s Agent shares the details of all the pods and containers in your K8s to nOps for cost optimization recommendations.

This worker contains a database to keep user entries, pulls metadata from their accounts on a scheduled basis, and publishes the output to a Kafka topic.

In this document you will learn how to deploy the nOps K8s agent via a Helm chart. The agent relies on Prometheus, a monitoring app; you write the configuration as YAML files, and nOps bundles them with Helm, creating a Helm chart for monitoring K8s via the nOps K8s agent.

Before you get started with the installation of nOps K8s agent, make sure you set the Kubernetes context in which you want to deploy the agent (prod, dev, etc).

Requirements

The installation requirements of the nOps K8s Agent are divided into “Development” and “Development and Deployment”.

For Development only, the requirements are:

  • Tilt
  • make

For Development and Deployment, the requirements are:

  • Tilt
  • Make
  • Helm
  • Kubernetes command line tool, kubectl, connected to any K8s cluster, e.g., k3d.

Development

For simple development of the nOps K8s Agent, copy the contents of the ./charts/nops-k8s-agent-dev/values.yaml file to a ./charts/nops-k8s-agent-dev/local_values.yaml file and enter your configuration details in local_values.yaml.

You can use any K8s cluster provider, this example uses k3d:

# Launch cluster
make dev_infra

# Launch stack
make run

Development and Deployment

The Development and Deployment process is divided into five steps:

  1. Get the repository
  2. Create a namespace
  3. Deploy Prometheus
  4. Configure values.yaml
  5. Deploy Agent from Source Code or Deploy Agent via Helm Repo

Get the repository

To install the K8s agent, the first step is to get the latest release version of the agent. Since the current release uses a Helm chart for deployment, you need to clone the nOps K8s Agent public GitHub repository to get the chart files.

nOps also has a zip file and tar.gz file for every release in the repository.

Create namespace

The second step is to create a namespace for the nOps K8s agent using kubectl. Use these commands to create the namespace:

kubectl create namespace nops-k8s-agent
kubectl config set-context --current --namespace=nops-k8s-agent

Deploy Prometheus

The third step is to deploy Prometheus, which the agent uses to collect metrics, into your cluster. Use these commands to deploy and launch it:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack

You can also use your own Prometheus instance with the nops-k8s-agent. If you don’t have Prometheus installed already, you will need to install it first.

Configure values.yaml

The values.yaml file (./charts/nops-k8s-agent/values.yaml) is part of the Helm templating engine: the configuration in values.yaml is fed into the templates to render the K8s agent Helm chart.

These are the variables required to create the K8s agent Helm Chart:

  • APP_PROMETHEUS_SERVER_ENDPOINT — Depends on your Prometheus stack installation (different for every person and every cluster).
  • APP_NOPS_K8S_AGENT_CLUSTER_ID — Needs to match your cluster ID.
  • APP_NOPS_K8S_COLLECTOR_API_KEY — See, nOps Developer API to learn how to get your API key.
  • APP_NOPS_K8S_COLLECTOR_AWS_ACCOUNT_NUMBER – The 12-digit unique account number of the AWS account, which is configured within nOps.

If you change these values in the chart directory, they become the new defaults. You can also override them during deployment of the agent via the Helm repo.

You can either use your own chart values file, or you can modify and use our example setup_values script to fetch environment variables from SSM Parameter Store (not encrypted):

# Patch values for env_variables using the SSM store.
python3 deploy/setup_values.py $CI_ENVIRONMENT_SLUG charts/nops-k8s-agent/values.yaml > /tmp/values.yaml

Deploy Agent From Source Code

Prior knowledge of K8s and Helm is necessary to understand the commands and deploy the agent from source code. If you are not comfortable with deploying from source code, you can deploy the agent via the Helm repo instead.

Following is the Helm command that will start the Helm Chart and deploy the K8s agent on the Kubernetes cluster:

# Upgrade chart.
helm \
upgrade -i nops-k8s-agent ./charts/nops-k8s-agent \
-f /tmp/values.yaml \
--namespace nops-k8s-agent \
--set image.repository=ghcr.io/nops-io/nops-k8s-agent \
--set image.tag=deploy \
--set env_variables.APP_ENV=live \
--wait --timeout=300s

If, instead of fetching the values from SSM Parameter Store, you decided to add the values in code, omit -f /tmp/values.yaml \ from the command.

Deploy Agent via Helm Repo

Before you deploy your agent via Helm repo, make sure your values.yaml file is correctly configured. See, Configure values.yaml to learn more.

Use these commands to deploy the agent via Helm repo:

helm repo add nops-k8s-agent https://nops-io.github.io/nops-k8s-agent
helm install nops-k8s-agent nops-k8s-agent/nops-k8s-agent -f values.yaml

To deploy the K8s agent on multiple AWS accounts associated with nOps, you can pass additional values in the above command to override the defaults set in your values.yaml file.

Release Management

Helm 2 used a client (CLI) and a server (Tiller) — Tiller created the resources and kept track of release versions.

Helm 3 uses only a client (CLI) with no server component, as Tiller was removed due to security issues; release state is stored in the cluster itself.

Releases are available in the nOps K8s Agent public repository.

Some helpful Helm commands to manage release versions:

helm install <chartname>
helm upgrade <chartname>    # for changes to an existing deployment instead of creating a new one
helm rollback <chartname>   # if anything goes south

Install nOps AWS Lambda Forwarder Agent#

This agent will forward events from AWS CloudTrail into nOps.

In this document you will learn how to install the nOps AWS Lambda Forwarder Agent to forward events from your AWS CloudTrail into nOps via:

  • CloudFormation stack
  • Manual Setup

Requirements

Some of the requirements for installing the Lambda Forwarder Agent are:

  • AWS CloudTrail with an S3 bucket for CloudTrail logs must be configured before deploying this stack.
  • The S3 bucket for AWS CloudTrail and the nops-aws-forwarder must be in the same region.
  • An API key from nOps. If you want to use an encrypted key, set up a symmetric encryption key in KMS in the same region as the Lambda function, and grant permission to the Lambda execution role later.

Installation

The recommended way to install the Lambda Forwarder Agent is to use the CloudFormation stack, but if for some reason the installation fails or you don’t want to use CloudFormation, you can also install the agent manually.

CloudFormation

To start the installation process, log into your admin AWS account/role and click the nOps AWS Lambda Forwarder CloudFormation stack deployment link to start the deployment of the Forwarder Agent.


Note: To take a look at the CloudFormation template, see nOps AWS Lambda Forwarder CloudFormation YAML Template.


When you click the deployment link, you will be redirected to AWS > CloudFormation > Stacks > Create stack.

In the Create stack page:

  1. Fill in pnOpsApiKey or pnOpsKmsAPIKey, pCTForwarderReleaseVersion, and pCloudtrailBucketName. All other parameters are optional.
  2. Click Create stack, and wait for the creation to complete:
  3. You can find the installed forwarder Lambda function under the stack’s “Resources” tab with logical ID rLambdaForwarder:
  4. If you use a KMS-encrypted API key, provide the access permission for the Lambda role for the KMS key.

Repeat steps 1 to 4 in another region if you operate in multiple AWS regions with a single-region trail.

Manual

If you can’t install the Forwarder Agent using the provided CloudFormation template or you don’t want to use CloudFormation, you can install the Forwarder Agent manually:

  1. Create a Python 3.9 Lambda function using nops-aws-forwarder-deployment-package-<VERSION>.zip from the latest releases.
  2. Save your nOps API key to Lambda’s environment variable NOPS_API_KEY or encrypted KMS key as NOPS_KMS_API_KEY.
  3. Add the s3:GetObject permission to the Lambda execution role.
  4. Configure triggers.
  5. If you use a KMS-encrypted API key, provide access permission for the Lambda role for the KMS key.
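
The key-resolution logic in steps 2 and 5 can be sketched as follows. This is a hypothetical illustration of the precedence between the two environment variables, with the KMS decryption step abstracted into a caller-supplied function; it is not the forwarder's actual source code:

```python
def resolve_api_key(env, decrypt=None):
    """Prefer the plaintext key; fall back to decrypting the KMS-encrypted one."""
    if env.get("NOPS_API_KEY"):
        return env["NOPS_API_KEY"]
    if env.get("NOPS_KMS_API_KEY"):
        if decrypt is None:
            raise RuntimeError("NOPS_KMS_API_KEY set but no decrypt function given")
        # In a real Lambda, decrypt would call KMS using the execution role's
        # permission on the key (step 5).
        return decrypt(env["NOPS_KMS_API_KEY"])
    raise RuntimeError("no nOps API key configured")

# In a real Lambda handler, env would be os.environ.
print(resolve_api_key({"NOPS_API_KEY": "plain-key"}))  # plain-key
```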

Development

To update to a new version, run the following:

./deploy_scripts/bump_version.sh minor/major/main

Developer Documentation#

Getting started with the nOps developer API#

How to create an API key and how to sign your API requests

This topic describes the steps to create a signed request. This means that you must generate a key pair that is used to verify your signature.

Summary of steps

To create a signed request, you must complete the following steps.

  1. Configure an API key in nOps through the nOps app.
  2. Create a public/private key pair. Configure nOps with the public key to validate the signature.
  3. Compose your request.

Create a public/private key pair.

Create a key in the nOps application.

  1. Log into the nOps console and from the Settings pane click the API Key option
  2. Click on Let’s Generate Your API Key
  3. You will see a confirmation dialog that contains your newly generated key.
  4. Click the copy button on the right of the key, and save the key.
    You must download a copy of the key when it is generated as you will not be able to access it again.
  5. Click Okay.

Next you will generate a PEM certificate and extract the public key.

Create a Signature Verification Key pair

The following instructions are for Unix-based OS machines. For Windows platforms/OS we suggest using either OpenSSL or PuTTYgen.

Begin by opening a command window.

  1. Generate a PEM Certificate using the following command:
    $ openssl genpkey -out rsakey.pem -algorithm RSA -pkeyopt rsa_keygen_bits:1024
  2. Extract a public key from the PEM certificate by using the following command.
    $ openssl rsa -in rsakey.pem -pubout > key.pub
  3. Copy and paste this information into the nOps API Key Signature Verification field.
  4. Click Save.
    You now have a fully configured API Client and can use this key when you send a request to nOps.

Compose your request

Once you have the key, you can compose a signature string and sign it and send your request.

Signing your Request

The format for the signature is:
{client_id}.{date_str}.{url}?api_key={key}

The information for the signature request contains the following:

  • client_id: The client ID is the first part of the api_key that is returned.
    In this example key: api_key=123.aaaa4432454ccccb5a2280e755fdzzzz
    the client_id is 123
  • date_str: Use the yyyy-MM-dd format, such as 2022-11-07
  • url: Use a request url that does not include the domain name. Ensure that the url contains a trailing slash (/) or the signature will not be verified, and your request will fail. The signature must match exactly before it can be verified.
  • API key: Use the format ?api_key=xxx

For example:

123.2022-01-10./nops_api/v1/billingGetTotal/?api_key=123.aaaa4432454ccccb5a2280e755fdzzzz

The example signature shown above can only be used:

on the date: 2022-01-10
for the API URL of: /nops_api/v1/billingGetTotal/
for the client (123) indicated within the api_key request:
?api_key=123.aaaa4432454ccccb5a2280e755fdzzzz

The signature must match exactly for the signature to be verified.
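
Putting the pieces above together, composing the signature string is a simple string format:

```python
def signature_string(client_id, date_str, url, api_key):
    """Compose {client_id}.{date_str}.{url}?api_key={key} per the format above."""
    # The url must keep its trailing slash, or verification will fail.
    assert url.endswith("/"), "request url must end with a trailing slash"
    return f"{client_id}.{date_str}.{url}?api_key={api_key}"

s = signature_string("123", "2022-01-10",
                     "/nops_api/v1/billingGetTotal/",
                     "123.aaaa4432454ccccb5a2280e755fdzzzz")
print(s)
# 123.2022-01-10./nops_api/v1/billingGetTotal/?api_key=123.aaaa4432454ccccb5a2280e755fdzzzz
```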

Sign and encode your signature string using your private key.

Following is an example of how to create a key pair and sign your request using Python, instead of the prior instructions. To use the Python script, you may first need to install pycryptodomex with this command:

pip install pycryptodomex
See: https://pycryptodome.readthedocs.io/en/latest/src/installation.html

import binascii
from Cryptodome.Hash import SHA256
from Cryptodome.PublicKey import RSA
from Cryptodome.Signature import pkcs1_15

# Generate an RSA key pair; keep the private key and upload the public key.
key = RSA.generate(1024)
private_key = key.export_key(pkcs=8)
public_key = key.publickey().export_key().decode()

# Hash the signature string and sign it with the RSA key object.
message = "123.2022-01-10./nops_api/v1/billingGetTotal/?api_key=123.aaaa4432454ccccb5a2280e755fdzzzz"
sha_bytes = SHA256.new(message.encode())
signature = pkcs1_15.new(key).sign(sha_bytes)
signature = binascii.b2a_base64(signature)[:-1].decode("utf-8")

To send an API request

Include the encoded signature in the x-nops-signature header.

API requests should use the following format.

https://app.nops.io<enter_REST-URI>?api_key=<Client_key>

<enter_REST-URI> is the location of the endpoint for example: /c/admin/projectaws/

<Client_key> is the API key you generated within nOps in the Create a public/private key pair procedure earlier in this document.
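
As a sketch, the pieces can be assembled with the standard library; the endpoint path and key are the examples from this document, and the signature value is a placeholder for the output of the signing step:

```python
import urllib.request

api_key = "123.aaaa4432454ccccb5a2280e755fdzzzz"  # example key from this document
signature = "BASE64-ENCODED-SIGNATURE"            # placeholder: output of the signing step

# Build (but do not send) the request with the signature header attached.
req = urllib.request.Request(
    f"https://app.nops.io/c/admin/projectaws/?api_key={api_key}",
    headers={"x-nops-signature": signature},
)
print(req.full_url)
```

Sending it is then urllib.request.urlopen(req), or any HTTP client that lets you set the x-nops-signature header.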

See also:

nOps Public API documentation

GET and POST APIs for /projectaws and /projectaws/{id}#

Describes GET and POST API parameters

This API allows you to provision and query your cloud accounts within nOps. It is a useful tool to provision and credential many accounts or to manage their configuration. The available actions are: GET and POST.

The parameters are also listed on the Swagger pages and contain information about the structure (String, Integer or Boolean).

Any items marked with an asterisk * are Mandatory.

Retrieve a collection of Cloud Accounts

API Endpoint: /c/admin/projectaws/

Method: GET

GET Response Parameters

Parameter | Title | Description
id | ID | Unique integer value identifying this AWS project.
name* | Name | Displays the name.
client | Client | The client number.
access_key | Access key | The AWS access key.
access_type | Access Type | Type of AWS access (IAM, Role, etc.).
account_number | Account number | The account number associated with the query.
role_name | Role name | The role name. Maximum length of 255 chars.
external_id | External ID | The ID provided by AWS. Maximum length of 255 chars.
arn | ARN | The AWS ARN.
bucket | Bucket | String.
is_one_click_setup | Is one click setup | Boolean. Set to true if one-click setup was used.
report_name | Report name | The name assigned to the report.
report_path_prefix | Report path prefix | The report path prefix appended to your Cost and Usage Report (CUR).
status | Status | The status code.
cloud_type | Cloud Type | The cloud type (such as AWS or Azure).
created | Created | The created date expressed as yyyy-mm-ddThh:mm:ssZ.
modified | Modified | The modified date expressed as yyyy-mm-ddThh:mm:ssZ.
azure_cpv | Azure cpv | Value = false for an AWS query.
azure_csp_nce | Azure csp name | Value = false for an AWS query.
tenant_name | Tenant name | Max length: 255 chars. Specifies the tenant.

Provision a Cloud Account in nOps

This API is used to provision a cloud account within nOps. It is a useful tool to provision and credential many accounts or to manage their configuration.

Any items marked with an asterisk * are Mandatory.

API Endpoint: /c/admin/projectaws/

Method: POST

POST Request body

Parameter | Title | Description
name* | Name | Minimum/maximum character length 1/255. Enter a name.
access_key | Access key | The access key.
secret | Secret | The client secret.
access_type | Access Type | The access type.
account_number | Account number | The account number.
role_name | Role name | The role name.
external_id | External ID | The external ID.
status | Status | Integer. Maximum: 2147483647, minimum: -2147483648.
cloud_type | Cloud type | The cloud type, such as AWS or Azure.
azure_cpv | Azure cpv | Value = true if this is Azure.
azure_csp_nce | Azure csp name | Value = true if this is Azure.
tenant_name | Tenant name | Specifies the tenant name.

Retrieve information about a specific Cloud Account

API endpoint: /c/admin/projectaws/{id}/

Method: GET

Use to retrieve information about the specified project IDs:

GET Response Parameters for {id}

Parameter | Title | Description
id | ID | Unique integer value identifying a project.
name | Name | The project name.
client | Client | The client name.
access_key | Access key | The AWS access key.
secret | Secret | The client secret.
access_type | Access Type | Displays the access type.
account_number | Account number | The account number for the specified project(s).
role_name | Role name | The AWS role.
external_id | External ID | The associated external ID.
arn | ARN | The Amazon Resource Name.
bucket | Bucket | Name of the S3 bucket.
is_one_click_setup | Is one click setup | Boolean: value = true if one-click setup was used.
report_name | Report name | The report name.
report_path_prefix | Report path prefix | The report path prefix for the Cost and Usage Report (CUR).
status | Status | Integer. Maximum: 2147483647, minimum: -2147483648.
cloud_type | Cloud type | The cloud type, such as AWS or Azure.
created | Created | The created date expressed as yyyy-mm-ddThh:mm:ssZ.
modified | Modified | The modified date expressed as yyyy-mm-ddThh:mm:ssZ.
azure_cpv | Azure cpv | Value = false for an AWS query.
azure_csp_nce | Azure csp name | Value = false for an AWS query.
tenant_name | Tenant name | The tenant name.

Update information about a specific Cloud Account

API endpoint: /c/admin/projectaws/{id}/

Method: POST

Use to provision a cloud account with the specified project ID:

POST Request body

Parameter | Title | Description
name* | Name | Minimum/maximum character length 1/255. Enter a name.
access_key | Access key | The access key.
secret | Secret | The client secret.
access_type | Access Type | The access type.
account_number | Account number | The account number.
role_name | Role name | The role name.
external_id | External ID | The external ID.
status | Status | Integer. Maximum: 2147483647, minimum: -2147483648.
cloud_type | Cloud type | The cloud type, such as AWS or Azure.
azure_cpv | Azure cpv | Value = true if this is Azure.
azure_csp_nce | Azure csp name | Value = true if this is Azure.
tenant_name | Tenant name | Specifies the tenant name.

Configurations#

Customize Your Settings#

Getting started with customizing the Settings page

Customize Your Settings

The Settings page has multiple areas to set up when first starting with nOps. It covers areas such as integrations, notifications, custom tags and rules, SSO, and adding new users to the software.

Go to your profile in the top right and click Settings.

On the Settings page:

There are multiple options for choosing which settings should be enabled. Below is a list of the options that can be customized.

Team Members: Add additional users who will need access to nOps

Integrations: Setup integrations with Slack, PagerDuty, and Jira

Notifications: Receive Daily, Weekly, or Monthly notifications for Cost Changes, nOps Rules, Security Dashboard, SOC2 Readiness Reports, HIPAA Readiness Reports, and CIS Readiness Reports.

Jira Cloud: Integrate with Jira

Custom Rules: Write a query for specific rules across the 5 Pillars of AWS

SSO: Integrate with OneLogin or Okta

Default Tagging: Create Tags for your Resources

If you need further assistance please follow the help articles below or email: help@nops.io

Related Articles:

Adding Users

Integrations

How to use the Notification Center

How to Create nOps Custom Rules

How to Perform Default Tagging

Dashboard#

Create a Custom Dashboard#

How to Create a Custom Dashboard

Dashboards summarize long, high-volume reports and analyses, and nOps dashboards are no different: they make it easier to get insights faster, at a glance. There are a lot of metrics in nOps, so a custom dashboard that shows just the metrics you need is a lifesaver. Here are the steps involved in creating one.

On the home page, navigate to the Reports menu and click it.

On the drop-down that appears, click the Custom Dashboard menu item.

This will lead to the Custom Dashboard page for creating a custom dashboard.

Click Custom Dashboard in the top right corner of the screen.

In the window that pops up, enter the name of the dashboard you wish to create and select the dashboard type.

The page has two buttons displayed

  • Add Rules block: For adding the actual reports based on different nOps rules and different categories.
  • Add Sub Block: For adding a fresh report block for adding more rules.

To add a new rule, click the Add Sub Block button. This will show a pop-up with a list of different nOps rules in different categories.

For this example, we are configuring a security-based analysis dashboard, so all the rules added will be security rules. Other rules can be added if needed; the other categories include Cost, Reliability, Operations, and Performance. To add a rule, tick the checkbox of the rule(s), then click the Confirm button.

When the rules applied are confirmed, click the Save Dashboard button to save the Dashboard.

To view the dashboard that was just created, navigate to Reports > Custom Dashboard to see the list of custom dashboards. The new dashboard will be among them; click on the Security Analysis dashboard that was just created.

Report blocks can be added to the dashboard to create additional headings and a clear separation between groups of corresponding reports, as was done previously.

Rules#

Create Custom Rules#

Creating nOps Custom Rules

Create your own custom queries for specific resources. The Custom Rule will be added to reports and used as a filter.

Log in to your nOps Dashboard

Click on nOps Rules on the top menu bar to take you to the nOps rules page

When the page is open, click the “+” sign on the side menu.

This will open a pop-up box where you enter the Defined Query, the Rule Name, and the Rule Label, which corresponds to the Well-Architected pillar the rule should apply to.

Click the Save button to save the rule that has been created.

The new rule will appear in the list of Custom Rules on the sidebar. Click the created rule to see the evaluations based on the query configured for that rule.

Settings#

Configure SSO#

Configure SSO for nOps and for your SAML provider

How to Integrate SSO in nOps

Running a secure cloud system is very important. With the new nOps SSO feature, integrating SSO from your favorite SAML 2.0 provider is a smooth and easy process. You can currently integrate Okta, OneLogin, Azure Active Directory (Azure AD) amongst others.

Getting Started

To incorporate SSO in nOps, you need to configure the SSO for your SAML provider. To do that, you first need to get some credentials from your nOps dashboard.

Your nOps Credentials

  1. To access your nOps SSO credentials, navigate to your SSO Settings Page. Go to:
    Organizational Settings > SSO if you’re using the client portal
    or Partner Settings > SSO for the partner portal.
    You will be prompted to enable SSO for access to the SSO Settings page.
  2. Copy the Assertion Consumer Service and Entity ID values on the SSO Settings page and paste them into your SAML provider’s SSO configuration settings.
  3. Next you need to map some defined attributes. This should be done using the exact values as described. These attributes are called “Parameters” in OneLogin.
Map this Attribute value | To this Attribute name
Email | User.Email
First Name | User.FirstName
Last Name | User.LastName
Groups | User.groups

When you are done, you will be provided setup instructions which you will then use to configure SSO on nOps.

Configuring SSO on nOps

After setting up SSO with your SAML provider, configure SSO on nOps.

To do that, you need some key credentials from Okta or OneLogin (i.e. your provider). They are:

  • Issuer URL (entityId)
  • SAML 2.0 Endpoint (HTTP) (singleSignOnService: URL)
  • X.509 Certificate

Copy these values and paste them in their respective input fields on the nOps SSO settings page shown below.

Assigning Users

After completing these steps, you can add existing users to your application. New users will need to complete a one-time email activation in order to have SSO enabled for them.

Additional Features

nOps has some new features that you can activate for your SSO integration.

Enable SSO Login

When you enable the Enable SSO Login toggle shown below, users will be redirected to the SSO login for authentication the next time they try to sign in, and will only need to provide their email to sign in to nOps.

Leaving this feature disabled will require users to log in with their current login password credentials. However, this is only possible for users who went through the nOps sign-up process.

Enforce SSO login

To enforce SSO login for all users, you must specify a domain in the input box and also select the Enforce SSO login for all domain users checkbox shown below.

Users from the specified domain must use the SSO login process to sign in, or they will be denied access.

However, if you want to log in from another domain, copy the value shown in Shareable Link for IDP Login and sign in using that link.

Setting User Roles

This feature allows you to choose a default role for users. You can choose between:
– client-member and client-admin if you are using the Client nOps portal

OR
– partner-member and partner-admin if you are using the Partners nOps portal.

For the Partner portal: the partner-admin role can send invitations, configure SSO, and access the partner's clients. The partner-member role has limited access to clients only.

For the Client portal: the client-admin role has access to all available options including SSO while the client-member role has no access privileges to the Settings pages.

Control your SSO user groups

You can also control your SSO user groups by setting an nOps role based on the SAML group. This feature is currently only available for Okta.

To enable this feature, you need to specify at least one value for admin and user groups.

In addition, you can select the Allow SAML Group Configuration to Override nOps Role checkbox. This gives preference to your provider's SAML group configuration over the nOps-defined role.

Update or Delete SSO Configuration

Lastly, you can update your SSO configuration or delete it entirely.

Note that deleting your SSO integration is irreversible. You cannot undo the deletion.

Configure SSO for Azure#

nOps supports SSO for Azure.

While implementing SSO (single sign-on), we recommend opening 2 browser tabs. In one tab, open and log into your nOps account; in the other, open the Azure portal. You will need to copy information from one application to the other in order to sync the information and allow SSO access with Azure.

This topic is for Clients who log in using an Administrator Role. It assumes that you have nOps configured on your Azure AD portal.

To Set Up SSO on nOps

  1. Log in to nOps and navigate to Organizational Settings from the profile link.
    Or, as a Partner Admin, click the SSO link.
  2. From the Settings pane click the SSO option.

If you do not have SSO configured, you will see a dialog to enable it.

  1. Click Enable SSO to go to the SSO Settings page.
  2. Enable the Enable SSO Login toggle.
  3. From the Select SSO Type drop-down, select Azure.

Now you need to add an SSO configuration on the Azure portal.

To Set Up SSO on Azure

  1. Log in to the Microsoft Azure portal and click the Azure Active Directory widget to go to the Overview page.
  2. Click + Add and select Enterprise Application.
  3. In the Browse Azure AD Gallery, search for SAML toolkit and click the icon when it is displayed.
  4. In the Azure AD SAML Toolkit dialog, enter a Name for this application and press Enter. This may take a few minutes to save.
    Suggestion for name: nops-SSO
    After the name is entered you will be taken to the Overview page to continue to set up this application.

Assign users and groups and set up the single sign on (SSO)

  1. Begin assigning users by clicking the link in 1. Assign users and groups widget.
  2. At the Add Assignment page click + Add user/group from the toolbar.
  3. Click the None Selected link, and at the Users dialog enter search criteria to find and add users. The system may identify users that you can select.
  4. Click on the user/s to add them.
  5. At the Add Assignment page, click the Assign button to add the users you selected. You will see a success dialog and return to the Users and groups page.
  6. Once you have completed adding all users click the Overview tab in the left pane.

Set up the single sign on (SSO) widget

  1. Click the Get Started link in the 2. Set up single sign on widget.
  2. At the Single sign-on page, select the SAML widget to open the SAML-based Sign-on page. You will configure URLs and attributes by copying the information from nOps and pasting it into the Basic SAML Configuration page in Azure.
  3. From Basic SAML Configuration click Edit, then click Add identifier.
  4. Replace the Identifier (Entity ID) field with the Entity ID URL from the nOps SSO page.
  5. Replace the Reply URL (Assertion Consumer Service URL) with the Assertion Consumer Service URL from nOps.
  6. Replace the Sign on URL in Azure with the Shareable Link for IDP Login URL from nOps.
  7. Once you are done click the Save icon on the top left corner of the dialog.

Return to the Sign-on page to add attributes.

  1. Click Edit on the Attributes and Claims widget to add attributes.
  2. On the Attributes & Claims dialog, click + Add new claim to open the Manage claim dialog.
    You will add 3 new claims. You must enter mandatory information for Name, Source, and Source Attribute as shown in the following table. Save each claim before you add the next one.
Name             Source      Source Attribute
User.FirstName   Attribute   user.displayname
User.LastName    Attribute   user.displayname
User.Email       Attribute   user.mail

3. Click Save to complete the configuration on the Azure portal.

Entering information from the Azure portal to nOps

To complete the set up, copy the following items from the Azure portal to the nOps SSO page.

  1. From the SAML-based sign-on page, navigate to section 3, the SAML Signing Certificate widget, and click the Certificate (Base64) download link.
  2. When it is downloaded, open the file with a plain-text editor such as Notepad (DO NOT USE WORD) and copy the contents of the certificate into the nOps X.509 Certificate field.
  3. From section 4 of the Azure SAML Sign-on page, copy the Login URL into the SAML 2.0 Endpoint (HTTP) (singleSignOnService: URL) field in nOps.
  4. Copy the Azure AD Identifier URL into the nOps Issuer URL (entityId) field.
  5. In the nOps SSO dialog navigate to User Roles/Groups. For Default role select client-admin to apply this role as a default for all users logging in from the Azure portal.
  6. Click Setup SSO Configuration to complete the setup. You have now completed the SSO set up on both nOps and on the Microsoft Azure portal.

Test your Set-up

You can now test your setup.

  1. From the Azure portal SAML-based Sign-on page, click the Test button in section 5.
  2. In the Test single sign-on dialog, select Sign in as current user and click Test sign in.
  3. Navigate to the nOps webpage to see that you are being signed in through the Azure single sign on.

To create and add a Group configuration

  1. Click the Single sign-on tab in the left pane.
  2. Click + Add a group claim to add a group.

You will need to enter some advanced options for this claim.

  1. At the Group Claims dialog, select Source Attribute: Group ID.
  2. Then click the Advanced options link.
  3. Click the Filter groups (preview) checkbox and enter information for the 3 fields:
    Attribute to match: Display name
    Match with: Contains
    String: nops
    The string should match the name of the group you entered.
  4. Check the Customize the name of the group claim box.
  5. Enter the Name for the attribute as: User.Groups.
  6. Save the Group Claim.
  7. Return to the Single sign-on tab. You should see user.groups added to the User Groups setting in the Attributes and Claims section.
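The filter settings above (Attribute to match: Display name, Match with: Contains, String: nops) behave like a simple substring match on group display names. A conceptual sketch in Python (Azure applies this server-side when emitting the claim; the function name is illustrative):

```python
# Conceptual sketch of the "Filter groups" behavior: only groups whose
# display name contains the match string are emitted in the group claim.
def filter_groups(display_names, match_string="nops"):
    # "Attribute to match: Display name" + "Match with: Contains"
    return [name for name in display_names if match_string in name]

print(filter_groups(["nops-group", "finance-team", "dev-nops"]))
# -> ['nops-group', 'dev-nops']
```

This is why the group you create later must contain the string you enter here.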

Add the group to the Azure portal.

  1. Click Home in the breadcrumb links at the top of the page.
  2. From the Home page find and click Groups.
  3. At the Groups | All groups page click New group.
  4. For Group name, enter a name containing the string you entered earlier (nops), for example nops-group.
  5. Click Create to return to the Groups | All groups page, then refresh the page to see the group you added. You can also search for it.
  6. Copy the Object ID for the group and enter it in the nOps SSO page under User Roles/Groups > Client Admin Groups field.
  7. Ensure that the Set nOps role based on SAML Group toggle is enabled.
  8. Then click Update SSO Configurations.

To test this group integration, where a member of the group is automatically logged in as an Admin user:

  1. Return to the Home page in Azure Portal.
  2. From My Apps, select the nOps app you added and click on it.

You are directed to the nOps Web app login page and are automatically logged in since SSO was set up from the Azure portal.

Configure SSO for Okta#

This topic describes how to enable SSO with Okta for access to nOps.

To enable single sign-on for Okta users, you must configure an Application Integration in Okta and enter information into nOps.

Information on creating an Application integration for Okta is also available through Okta online help.

After set up is complete, first-time users may need to confirm login permissions using the confirmation email.

You will need to copy and paste information from nOps into Okta and then from Okta to nOps. For ease of use, we recommend that you open 2 browser windows and log into both Okta and nOps as an Admin user.

The process contains the following steps:

Step 1: Copy information from nOps

Step 2: Set up nOps as a New App in Okta

Step 2A: From Okta Assign Users to the nOps application for SSO access

Step 3: Set up nOps Configurations and Defining Roles and Groups

Step 1: Information you need from nOps

Before you can set up nOps as an App in Okta you will need the following information from nOps.

  1. Log into nOps as an Administrator user.
  2. From nOps, open Organization Settings and navigate to the SSO integration page.
  3. Enable the SSO setting by using the toggle.
  4. Select Okta from the dropdown list.
  5. Copy URL information from the following fields in nOps to add to Okta in the next step.
    • Assertion Consumer Service
    • Entity ID

Step 2: Set Up the Okta SAML 2.0 Application

  1. Log in to Okta as an Admin user and navigate to the Applications page.
  2. Click Add Application.
    • For platform, select Web.
    • For sign-on method, select SAML 2.0.
    Instructions for creating SAML app integrations are also available on the Okta website.
  3. In General Settings enter an App name and click Next.
    App name suggestion: nOps.
  4. In the SAML settings page, enter information from nOps copied by you in Step 1:
    • In the Okta Single Sign On URL field enter AssertionConsumerService URL from nOps.
      Select the: Use this for recipient URL and Destination URL checkbox.
    • In the Okta Audience URI (SP Entity ID) field, enter EntityId information from nOps.
  5. Add the following Attributes and Group Attribute from the Attribute Statements (Optional) page.
    IMPORTANT: This information is mandatory.
    nOps Single Sign-On will not work if these are NOT configured. Ensure that you select the corresponding Value in the dropdown on the right. The name values for the configurations are case sensitive; enter them exactly as shown. Click Add Another if or when you need to add additional statement rows.
    There is information on the nOps page about setting up attribute and group attribute statements. See also How to create a group in Okta.
    IMPORTANT: A group name cannot contain any spaces. You must add all potential users of nOps to the Okta group that you create.
    You must provide the Okta group name in the nOps SSO dialog to enable the group and allow access.
  6. In the next step, select: I’m an Okta customer adding an internal app.
  7. After the app is created, click View Setup Instructions.
    Copy the following items from Okta to nOps:
From Okta                              To nOps SSO setting field
Identity Provider Single Sign-on URL   SAML 2.0 Endpoint (HTTP) (singleSignOnService URL)
Identity Provider Issuer               Issuer URL (entityId)
X.509 Certificate                      X.509 Certificate

Step 2A: From Okta, Assign Users to the nOps application

Only users assigned to the nOps app you created are enabled to use SSO when signing in to nOps. You can do this step at any time, and you can add or remove users using this procedure.

  1. Under Directory from the main menu choose People.
  2. Click Add Person and enter information about the person.
  3. Click Save or Save and Add Another to add additional persons who will need SSO access to nOps.
  4. When you are done you should see a list of people in the Okta account.
  5. For each person you added, click Activate to activate the Okta SAML application.
  6. Go to Applications under the main menu and click on the Okta nOps SAML application you created.
  7. Under the Assignments field, click Assign and find the persons you want to assign to this application. Refresh your Okta application to let the changes take effect.
  8. You should now be able to see the nOps application you created in your Okta account.
  9. Clicking on the nOps app should take you to the nOps website to confirm the SSO login. nOps will send you a confirmation email.
  10. Click the email confirmation to login using SSO.

Step 3: Finish setting up nOps Configurations

Next, navigate to the nOps SSO page. Ensure that you have copied the information from Okta:

From Okta                              To nOps SSO setting field
Identity Provider Single Sign-on URL   SAML 2.0 Endpoint (HTTP) (singleSignOnService URL)
Identity Provider Issuer               Issuer URL (entityId)
X.509 Certificate                      X.509 Certificate

For the SAML attribute and group attribute mappings, use the following:

Map this Attribute value    To this Attribute name
Email                       User.Email
First Name                  User.FirstName
Last Name                   User.LastName
Groups                      User.groups

Defining Roles and Groups

In the User Roles/Groups section of the nOps SSO dialog:
Enter the name of the group you created in Okta in the Client Admin Groups or Client User Groups field. Users in the group will have access to nOps through single sign-on.

You can also check the box to specify whether the SAML group configuration can override their nOps role.

When you are done click Setup SSO Configuration.

Configure SSO for OneLogin#

How to implement SSO for OneLogin for Clients

nOps supports SSO using OneLogin.

While implementing SSO (single sign on), we recommend opening 2 browser tabs. In one tab open and log into your nOps account, in the other open your OneLogin account. You will need to copy information from one application to the other in order to sync the information and to allow SSO access with OneLogin.

This topic is for Clients using an Administrator Role.

You will complete the following steps:

  1. Configure OneLogin in nOps
  2. Sign in to OneLogin and set up nOps
  3. Set up OneLogin configurations on nOps
  4. Add Information from nOps to OneLogin
  5. Adding Parameters to OneLogin
  6. Setting up Users on OneLogin

Configuring OneLogin on nOps

  1. Log in to nOps as an Admin user and select Settings.
  2. Click SSO on the left pane.
  3. At the SSO Settings page, enable the Enable SSO Login toggle and select the OneLogin option for Select SSO Type.
  4. Now navigate to the OneLogin app. You will return to this page to add the Issuer URL (entityId), SAML 2.0 Endpoint (HTTP), and X.509 Certificate from OneLogin.

Sign in to OneLogin and set up nOps

  1. In a new browser tab, login to OneLogin and navigate to the Applications page.
  2. Click Add App.
  3. Search for SAML Test Connector (advanced) to find the SAML 2.0 connector and click the icon.
  4. At the Add Connector dialog, you can change the Display Name, add an icon, and enter a description. Ensure that the Visible in portal toggle is turned on.
  5. Click Save. Once saved, you will see new tabs appear in the left pane.
  6. Click on Configurations.
  7. Copy the following configurations from the Enable SAML 2.0 page in the OneLogin app and paste them into the nOps OneLogin SSO page as described in the next section.
Copy from OneLogin field    Paste into nOps field
Issuer URL (entityId)       Issuer URL (entityId)
SAML 2.0 Endpoint (HTTP)    SAML 2.0 Endpoint (HTTP)
X.509 Certificate           X.509 Certificate

If required, use the one-line format tool to generate a certificate:

https://samltool.com/format_x509cert.php
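If you prefer not to use the web tool, the one-line format amounts to stripping the line breaks from the PEM file. A minimal sketch (illustrative only; the linked samltool page does the same job):

```python
# Illustrative sketch: collapse a multi-line PEM certificate into a single
# line by removing line breaks and surrounding whitespace. The BEGIN/END
# header lines are kept; only newlines and padding spaces are dropped.
def one_line_cert(pem: str) -> str:
    return "".join(line.strip() for line in pem.strip().splitlines() if line.strip())

cert = "-----BEGIN CERTIFICATE-----\nTUlJQ2Zq\nQ0NBZEE9\n-----END CERTIFICATE-----"
print(one_line_cert(cert))
# -----BEGIN CERTIFICATE-----TUlJQ2ZqQ0NBZEE9-----END CERTIFICATE-----
```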

Set up OneLogin configurations on nOps

  1. If you are logged out of nOps, log in and go to the SSO settings screen as described in the topic above.
  2. Paste the configurations from OneLogin into the fields in nOps as described in the last step (Step 7) of the previous section.
  3. When you are done, click Setup SSO.
  4. Refresh the page to populate values for AssertionConsumerService and EntityID if they are not populated already. You will return to OneLogin to enter these 2 values.

Adding Information from nOps to OneLogin

Now that OneLogin is set up on nOps, you need to add the nOps settings to your OneLogin configuration.

  1. On the OneLogin app settings page, open the Configuration tab.
  2. From the nOps page, copy the EntityId value and paste it into the Audience field on OneLogin.
  3. From the nOps page, copy the AssertionConsumerService value and paste it into the following fields on OneLogin:
    – Recipient
    – ACS (Consumer) URL* and
    – ACS (Consumer) URL Validator* fields
  4. Click Save to save the settings and go to the Info tab.

Adding Parameters on OneLogin

Add parameters to OneLogin so that you can sync the user names and other attributes between the two applications.

  1. From the OneLogin SAML 2.0 connector page that you set up in a previous section, navigate to the Parameters tab in the left pane. From here you will add 3 new fields.
  2. Click Add new field.
  3. Enter the field name User.Email, check the Include in SAML checkbox, and click Save. In the Value field, enter Email.
  4. Repeat the steps by clicking Add new field to add a field for User.FirstName: check the Include in SAML checkbox, and in the Value field enter First Name. Click Save.
  5. Repeat the steps by clicking Add new field to add a field for User.LastName: check the Include in SAML checkbox, and in the Value field enter Last Name. Click Save.

Adding Users on OneLogin

Users added in OneLogin can be added to nOps for SSO. However, you must first set up access for nOps.

  1. From the OneLogin app, click the Users tab on the top toolbar.
  2. Click New User.
  3. Turn the Active toggle on.
  4. Enter information about the user for the: First Name, Last Name, Email, and Username fields.
  5. Click Save User.
  6. Navigate to the Application tab in the left pane, and in the Application field click the + (plus) icon.
  7. Add the SAML 2.0 application you created earlier to grant access for this user.
    Click the More Actions dropdown and select Send Invitation. Upon receipt, the user must click the link to accept it and set a password.
  8. Later, when logging in to OneLogin, they will see the SAML 2.0 app.
  9. Clicking on the app directs the user to nOps.
    nOps sends an email requiring the user to confirm the SSO login.
  10. Once the confirmation is received, the user is able to log into nOps.

Configure Weekly Reports#

How to Configure Weekly Reports

This tutorial shows, step by step, how to configure weekly reports for a notification list.

In the top left corner of the dashboard, where the name of the currently logged-in user appears, click the arrow to reveal the drop-down menu. Click the Notifications Settings menu item.

Go to the Notifications Center on the left side of the screen.

The Notifications Center includes different categories: Cost Changes, nOps Rules, Security Dashboard, SOC2 Readiness Report, HIPAA Readiness Report, and CIS Readiness Report.

Select a category, and in the Users who you want to Notify (optional) section, enter an email address or use the dropdown to select users.

Click the Create or Update Preferences button to create the new notifications list.

In the Notification Center, you may select Weekly Reports to be sent to the users.

This is where the weekly report for all the AWS accounts connected to the nOps account can either be activated or de-activated.

Configure Weekly Reports V3#

How to Configure Weekly Reports

This tutorial shows, step by step, how to configure weekly reports for a notification list.

In the top left corner of the dashboard, where the name of the currently logged-in user appears, click the arrow to reveal the drop-down menu. Click the Settings menu item.

This will take you to the Settings page. On the left-hand side menu, locate the Notifications Center menu item and click on it.

This will open the Notifications Center, which allows you to configure notifications for different aspects of the nOps system. The tabs show the different aspects for which notifications can be configured: Cost, nOps Rules, Security Dashboard, SOC2 Readiness Report, HIPAA Readiness Report, and CIS Readiness Report.

To enable a weekly report for any of these aspects, click that tab, then click the Users who you want to Notify (optional) drop-down and select the email address. There is also the option of Slack notifications, labeled Receive notifications on Slack (optional).

After selecting the email, select the Weekly option under the Notify label, then click Subscribe or Update Preferences. This will activate weekly reports for that section of nOps.

Perform Default Tagging#

How to add a Default Tag

Tagging is essential for all the resources in your AWS account, and nOps makes tagging much easier. Let us show you how to use the nOps default tagging feature to tag resources.

Move the mouse to the top bar that contains the name of the logged-in user. On the drop-down that displays, click the Settings menu item.

This will lead to the Settings page.

On the side menu bar, click on the Default Tagging menu item. This will lead you to the default tagging page.

Creating a Tag

Click the Create New Default Tag button.

A form will pop up on the side of the screen, where the tagging information is to be filled in.

You can select more than one AWS account, as long as each account has been connected to nOps.

You can also add multiple tags to target a particular service or set of services. Click the Save button when done to create the tags.

This will create the tags and show them in the list alongside any existing tags.

Solution Providers#

Create New Recommendations#

How to Create New Recommendations

Log in to the Partner Dashboard.

Click the Recommendations menu item; this will take you to the Recommendations page.

On the Recommendations page, click the +Create new recommendation button. A pop-up menu will appear. Enter the Title of the recommendation and a Description of the recommendation:

Click on the Add Recommendation button to add the new recommendation to the list.

FAQ#

What is nOps?

  • nOps is a cloud management platform for AWS. It provides instant visibility to changes in your AWS infrastructure and enables change management, continuous cost & resource optimization, painless compliance & security audits, workflow automation with AWS Service Catalog, and automation of AWS Well-Architected Reviews.

Who created nOps?

  • nOps began as a set of methods, processes, and tools created by DevOps and cloud professional services teams at nClouds, an award-winning, certified AWS Premier Consulting Partner. From the start, services clients – ranging from startups to enterprise IT organizations – were very excited about these capabilities. Therefore, we expanded and productized the capabilities into nOps, a commercial SaaS offering.

What does nOps do?

  • Track and manage all your AWS cloud changes. Get instant visibility to change requests and delta to your infrastructure.
  • Monitor and optimize the usage cost for cloud infrastructure. Proactively reduce cloud costs by identifying zombie instances that were spun up, used, and abandoned with the meter running.
  • Create workflows integrated with AWS Service Catalog to centrally manage commonly deployed IT services.
  • Automate change authorization process with the built-in rules engine to auto-approve most standard changes.
  • For exceptions, create and trigger workflows easily that notify the right team members based on specific changes, sending the notifications, with context, via Slack, HipChat, and Jira.
  • Automatically log changes in config history in a modern CMDB, along with Jira tickets.
  • Go beyond current spreadsheet swapping to support security and compliance audits like SOC 2, with no pain.
  • Prevent and remediate incidents faster by giving SREs instant visibility to correlate changes.

How much does nOps cost?

  • Please reference the subscription options under Pricing

Will nOps cause my AWS billing to increase?

  • No, nOps helps bring visibility to your AWS costs. nOps is a web application that requires a read-only role to provide visibility to your entire infrastructure.

How does nOps use my AWS account details?

  • AWS account details are used to access logs and the AWS API. nOps does not access any application data.
  • nOps uses AWS CloudTrail and Amazon CloudWatch logs to create various dashboards.

How can I ensure that AWS account details are not compromised?

  • AWS account details are stored in a highly secure format. The nOps site is secured using SSL (Secure Socket Layer), and sensitive data is encrypted.
  • User sensitive data is encrypted. We first encrypt it in the browser then re-encrypt with a more secure algorithm (RSA 2048 and SHA-256) once it reaches our servers. All web connections are sent via 256-bit SSL.

Can I set up multiple AWS accounts in nOps?

  • You can add multiple AWS accounts in nOps and monitor the changes.

How can I obtain details for a particular server or infrastructure?

  • nOps provides a powerful search facility enabling you to search for a particular server or infrastructure.

How frequently is nOps data refreshed?

  • nOps data is refreshed every 60 minutes.
  • Billing data is fetched once a day.

Why might I see differences between my billing in AWS and the details in nOps?

  • There could be a difference due to credits, taxes, or adjustments made by AWS.
  • Also, there could be a difference due to timing. nOps ingests billing once every 24 hours from the daily billing files that AWS puts into your billing bucket.

Is nOps ITIL compliant?

  • We are actively working to make nOps ITIL compliant. We will announce compliance when it is ready.

What third-party integrations are available for nOps currently?

  • Currently, nOps supports integration with AWS Service Catalog, Jira Software, Slack, and email.

Can nOps help with a security audit?

  • Yes, nOps reports can help address security audit questions related to cloud changes.

Are there limits to how many events or change requests can be tracked with nOps?

  • No, there are no limits to the number of events or change requests.

What is Slackbot and how can it be used?

  • nOps Slack chatbot, or Slackbot, can help you to create change requests from a Slack channel. The nOps Slackbot monitors Slack channel conversations for keywords and will prompt for actions like creating a change request in nOps.
  • Copyright 2023 nOps.io. All Rights Reserved