S3

Breaking the Limits by Chad Prentice
Leveraging Cloud Architecture at Massive Scale


As an insurance company, you’re bound to have a lot of documents to store and maintain. As a 100-year-old insurance company with over 87 million policies and accounts, State Farm REALLY has a lot of documents…to the tune of 12 billion documents weighing in at almost 3 petabytes of data. Those totals grow by more than 3 million documents (3 terabytes) a day. Think about an insurance claim for an auto accident. You have the photos of the damage, the estimate of the repair and the payment for…

READ MORE
DynamoDB Ingestion to an Enterprise Data Lake by Clete Blackwell II
A journey in discovering architectural patterns

Diagram: DynamoDB Ingestion to Data Lake Design

Companies manage a lot of data, often keeping application data in many different data stores and in a variety of formats (e.g., file storage, DB2, PostgreSQL, Oracle, MSSQL, MySQL, MongoDB, DynamoDB). That data can also live in many different physical data centers, ranging from on-premises facilities to vendor data centers to public cloud data centers. Each data store is created for a specific purpose, such as customer contact information, conversation logs, policy data, or purchase history.
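
The subtitle promises a journey through architectural patterns, and the excerpt stops before the design is revealed. Purely as an illustration of one common ingestion pattern, and not necessarily the one the article lands on, the sketch below shows a Lambda handler that consumes a DynamoDB Stream and writes each change as newline-delimited JSON into an S3 raw landing zone; the bucket name, key prefix, and environment variables are hypothetical.

```python
import json
import os
from datetime import datetime, timezone

import boto3
from boto3.dynamodb.types import TypeDeserializer

s3 = boto3.client("s3")
deserializer = TypeDeserializer()

# Hypothetical names, for illustration only.
LANDING_BUCKET = os.environ.get("LANDING_BUCKET", "example-data-lake-raw")
KEY_PREFIX = os.environ.get("KEY_PREFIX", "dynamodb/example-table")


def handler(event, context):
    """DynamoDB Stream handler: convert each change record from DynamoDB's
    attribute-value JSON into plain JSON and land it in the raw zone."""
    lines = []
    for record in event.get("Records", []):
        image = record["dynamodb"].get("NewImage") or record["dynamodb"].get("Keys", {})
        item = {name: deserializer.deserialize(value) for name, value in image.items()}
        lines.append(json.dumps(
            {"event": record["eventName"], "item": item},
            default=str,  # Decimal values from DynamoDB become strings
        ))

    if not lines:
        return {"written": 0}

    # Partition by arrival date so downstream jobs can prune by day.
    now = datetime.now(timezone.utc)
    key = f"{KEY_PREFIX}/dt={now:%Y-%m-%d}/{context.aws_request_id}.jsonl"
    s3.put_object(Bucket=LANDING_BUCKET, Key=key, Body="\n".join(lines).encode("utf-8"))
    return {"written": len(lines), "key": key}
```

Streams-based capture like this keeps the lake current within seconds of a write, whereas periodic bulk exports trade that freshness for operational simplicity.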

READ MORE
Using Terraform for S3 Storage with MIME Type Association by Dillon Henn
Confidently deploy your content to S3 using Terraform - here's how we did it.


Today, many product teams use the Amazon Simple Storage Service (S3) to store Single Page Application (SPA) resources. Oftentimes they do this by packaging infrastructure code with the application code. However, this tends to add complexity to the pipeline because the content for S3 must be uploaded and synced in a separate step.
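
The article describes a Terraform-based deployment, but the underlying problem, associating each uploaded object with the correct MIME type so browsers render SPA assets instead of downloading them, is tool-agnostic. As a minimal sketch of that step only (not the Terraform approach the post covers), the Python/boto3 snippet below guesses each file's Content-Type from its extension during upload; the bucket name and build directory are hypothetical.

```python
import mimetypes
from pathlib import Path

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and build output directory, for illustration only.
BUCKET = "example-spa-bucket"
BUILD_DIR = Path("dist")


def upload_spa_assets(build_dir: Path = BUILD_DIR, bucket: str = BUCKET) -> None:
    """Upload every file in the SPA build output to S3, setting Content-Type
    from the file extension so the browser treats each asset correctly."""
    for path in build_dir.rglob("*"):
        if not path.is_file():
            continue
        content_type, _ = mimetypes.guess_type(path.name)
        s3.upload_file(
            Filename=str(path),
            Bucket=bucket,
            Key=path.relative_to(build_dir).as_posix(),
            # Fall back to a generic binary type when the extension is unknown.
            ExtraArgs={"ContentType": content_type or "application/octet-stream"},
        )


if __name__ == "__main__":
    upload_spa_assets()
```

In Terraform itself, a comparable extension-to-content-type mapping can be declared on the S3 object resources via their content_type argument, which is presumably what lets the deployment skip a separate upload-and-sync step.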

READ MORE