Tag Archive for: AQUA

Aqua Security Collaborates with CIS to Create the First Formal Guidelines for Software Supply Chain Security


BOSTON, June 22, 2022 (GLOBE NEWSWIRE) — Aqua Security, the leading pure-play cloud native security provider, and the Center for Internet Security (CIS), an independent, nonprofit organization with a mission to create confidence in the connected world, today released the industry’s first formal guidelines for software supply chain security. Developed through collaboration between the two organizations, the CIS Software Supply Chain Security Guide provides more than 100 foundational recommendations that can be applied across a variety of commonly used technologies and platforms. In addition, Aqua Security unveiled a new open source tool, Chain-Bench, which is the first and only tool for auditing the software supply chain to ensure compliance with the new CIS guidelines.
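As an illustration only, here is a minimal sketch of driving a Chain-Bench scan from Python. The repository URL and token placeholder are hypothetical, and the exact CLI flags should be checked against the Chain-Bench README, as they may change between releases:

```python
import subprocess

# Hypothetical target repository and token placeholder; chain-bench must
# already be installed and on PATH. Flag names follow the project's README
# at the time of writing and may differ in newer releases.
cmd = [
    "chain-bench", "scan",
    "--repository-url", "https://github.com/example-org/example-repo",
    "--access-token", "<SCM_ACCESS_TOKEN>",
]

# Runs the audit against the repository and raises CalledProcessError
# if the tool exits with a non-zero status.
subprocess.run(cmd, check=True)
```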

Establishing Best Practices for Software Supply Chain Security
Although threats to the software supply chain continue to increase, studies show that adoption of security controls across development environments remains low. The new guidelines establish general best practices that support key emerging standards like Supply Chain Levels for Software Artifacts (SLSA) and The Update Framework (TUF), while adding foundational recommendations for setting and auditing configurations on Benchmark-supported platforms.

Within the guide, recommendations span five categories of the software supply chain: Source Code, Build Pipelines, Dependencies, Artifacts, and Deployment.
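To make the flavor of these recommendations concrete, here is a minimal, hypothetical sketch (not taken from the guide itself) of one foundational control from the Dependencies and Artifacts categories: verifying a downloaded artifact against a pinned SHA-256 digest before using it:

```python
import hashlib
from pathlib import Path

# Placeholder digest; in practice this would be recorded in a lockfile
# or manifest at build time and reviewed like any other source change.
PINNED_SHA256 = "<sha256-of-approved-artifact>"

def verify_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to use an artifact whose digest does not match the pin."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise ValueError(
            f"integrity check failed for {path}: "
            f"got {digest}, expected {expected_sha256}"
        )

# Hypothetical artifact path for illustration.
verify_artifact(Path("dist/app-1.0.0.tar.gz"), PINNED_SHA256)
```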

CIS intends to expand this guidance into more specific CIS Benchmarks to create consistent security recommendations across platforms. As with all CIS guidance, the guide will be published and reviewed globally. Feedback will help ensure that future platform-specific guidance is accurate and relevant.

“By publishing the CIS Software Supply Chain Security Guide, CIS and Aqua Security hope to build a vibrant community interested in developing the platform-specific Benchmark guidance to come,” said Phil White, Benchmarks Development Team Manager for CIS. “Any subject matter experts who develop or work with the technologies and platforms that make up the software supply chain are encouraged to join the effort…

Source…

AWS Announces General Availability of AQUA for Amazon Redshift


SEATTLE–(BUSINESS WIRE)–Today, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced the general availability of AQUA for Amazon Redshift, an innovative new distributed and hardware-accelerated cache that delivers up to ten times better query performance than other enterprise cloud data warehouses. AQUA brings compute to the storage layer, helping customers avoid networking bandwidth limitations by eliminating unnecessary data movement between where data is stored and compute clusters. With AQUA, customers get more up-to-date dashboards, save development time, and have systems that are easier to maintain. AQUA is available on Redshift RA3 instances at no additional cost, and customers can take advantage of its performance improvements without any code changes. To get started with AQUA, visit https://aws.amazon.com/redshift/features/aqua.
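For illustration, a minimal sketch of turning AQUA on for an existing RA3 cluster via boto3; the region and cluster identifier are hypothetical, and the call assumes the Redshift API's ModifyAquaConfiguration action as documented at launch:

```python
import boto3

# Hypothetical region and cluster name; AQUA is only available on RA3 node types.
redshift = boto3.client("redshift", region_name="us-east-1")

resp = redshift.modify_aqua_configuration(
    ClusterIdentifier="my-ra3-cluster",
    AquaConfigurationStatus="enabled",  # 'auto' lets Redshift decide when to use AQUA
)

# The response reflects the new configuration; a status of 'applying'
# means the change is still being rolled out to the cluster.
print(resp["AquaConfiguration"]["AquaStatus"])
```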

Since its launch in 2012 as the first data warehouse built for the cloud, at one-tenth the cost of traditional data warehouses, Amazon Redshift has become the most popular cloud data warehouse. Last year, AWS announced the general availability of Amazon Redshift RA3 instances, which allow customers to scale compute and storage separately and deliver up to 3x better price performance than any other cloud data warehouse.

However, even as the performance of data warehouses continues to increase, the rapid growth of data that customers need to process has forced a difficult balancing act between performance and cost-effective scaling. The prevailing approach to data warehousing has been to build an architecture in which large amounts of centralized data are moved from storage to waiting compute nodes for processing. The challenge with this approach is the sheer volume of data movement between shared storage and the compute nodes. As data volumes continue to grow at a rapid clip, this movement saturates available networking bandwidth and slows down performance. In addition to the networking bottleneck, CPUs cannot keep up with the faster growth in storage capabilities (SSD storage throughput has grown 6x faster than the ability of CPUs to process data from…
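To see why that data movement dominates, here is a back-of-envelope sketch with illustrative numbers (not AWS benchmarks) comparing shipping a whole table over the network with filtering it at the storage layer first:

```python
# Illustrative numbers only, not AWS-published figures.
table_size_gb = 1_000        # data a query must scan
network_gb_per_s = 10        # aggregate bandwidth between storage and compute
selectivity = 0.01           # fraction of the data the query actually needs

# Traditional approach: ship the whole table to the compute nodes.
move_everything_s = table_size_gb / network_gb_per_s

# Storage-side filtering (the AQUA idea): only matching data crosses the network.
move_filtered_s = table_size_gb * selectivity / network_gb_per_s

print(f"move everything: {move_everything_s:.0f}s, "
      f"filter at storage: {move_filtered_s:.0f}s")
# move everything: 100s, filter at storage: 1s
```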

Source…