Amazon introduces new 'Data Pipeline' tool


Amazon last week launched Data Pipeline, a tool designed to help users integrate data from disparate sources. Within AWS, those sources can include data stored in Redshift, DynamoDB or the Simple Storage Service (S3). Redshift is Amazon's (NASDAQ: AMZN) cloud-based data warehouse, while DynamoDB is the company's NoSQL database service.

As reported by Network World, Data Pipeline provides a drag-and-drop graphical interface for creating automated, scheduled data workflows within AWS or between AWS and external locations. Building a workflow comes down to defining a source, a destination and any preconditions the data must meet, then setting a schedule so the process runs automatically. In a nutshell, Data Pipeline offers a data-driven approach to building workflows for applications that run on AWS.
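As an illustration of that source/destination/schedule model, here is a minimal sketch using the AWS SDK for Python (boto3). The pipeline name, region and schedule values are assumptions made up for the example, not details from Amazon's announcement; a real definition would also attach data nodes and activities for the source and destination.

import boto3

client = boto3.client("datapipeline", region_name="us-east-1")

# Create an empty pipeline shell; uniqueId guards against duplicate creation
# if the call is retried.
pipeline = client.create_pipeline(
    name="demo-pipeline",            # hypothetical name
    uniqueId="demo-pipeline-2012-12",
)
pipeline_id = pipeline["pipelineId"]

# Attach a minimal definition: a default object pointing at a daily schedule.
client.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=[
        {
            "id": "Default",
            "name": "Default",
            "fields": [
                {"key": "scheduleType", "stringValue": "cron"},
                {"key": "schedule", "refValue": "DailySchedule"},
            ],
        },
        {
            "id": "DailySchedule",
            "name": "DailySchedule",
            "fields": [
                {"key": "type", "stringValue": "Schedule"},
                {"key": "period", "stringValue": "1 day"},
                {"key": "startDateTime", "stringValue": "2012-12-01T00:00:00"},
            ],
        },
    ],
)

# Activation starts the scheduled runs.
client.activate_pipeline(pipelineId=pipeline_id)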

Amazon has been steadily beefing up its cloud-based infrastructure with new services this year. Just a few months ago, the company launched Amazon Glacier, a low-cost data archiving service that stores data for one cent per gigabyte per month. Because data is stored on multiple devices in multiple facilities, Amazon Glacier is designed to provide an average annual durability of 99.999999999 percent.
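For readers curious what archiving to Glacier looks like in practice, a minimal boto3 sketch follows; the vault name and file are hypothetical, and the cost comment simply applies the one-cent-per-gigabyte figure above.

import boto3

glacier = boto3.client("glacier", region_name="us-east-1")

# Vaults are the top-level containers for archives; "-" means the account
# that owns the calling credentials.
glacier.create_vault(accountId="-", vaultName="demo-archive-vault")

# At one cent per gigabyte per month, a 5 GB archive would cost roughly
# $0.05/month to retain (retrieval and request fees are billed separately).
with open("backup.tar.gz", "rb") as archive:
    response = glacier.upload_archive(
        accountId="-",
        vaultName="demo-archive-vault",
        archiveDescription="monthly backup",
        body=archive,
    )

# The archive ID is needed later to retrieve or delete the archive.
print(response["archiveId"])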

For more:
- check out this article at ZDNet
- check out this article at Network World

Related Articles:
Amazon launches low-cost Amazon Glacier Data archival
Amazon launches cloud data warehouse with Redshift
