We are using Amazon S3 to store objects of up to 500 KB. We have a .NET 4.0 web service installed on an EC2 instance, which makes the PutObject call with the 500 KB payload.
The problem is that when we make more than 100 simultaneous calls (with unique S3 keys) to this service, and therefore to S3, the EC2 instance's CPU hits 100%. We profiled the web service, and it shows that 99% of the processing time is spent in the AmazonS3Client.PutObject method.
We've tried configuring the S3 client to use HTTP (instead of the default HTTPS), and also experimented a little with the S3 key generation scheme, but nothing helped. The article "Amazon S3 PutObject is very slow" didn't help either.
Our S3 key schema is: "111_[incrementing ID].txt"
This excessive CPU usage does not happen if we use smaller payloads - e.g., less than 1 KB.
Can you give us some guidance on what could be done to improve CPU performance, or where to look?
And here's the source code for that call:
string fileName = "111_" + Id + ".txt";

using (AmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client(RegionEndpoint.XXXX))
{
    try
    {
        PutObjectRequest request = new PutObjectRequest();
        request.WithContentBody(dataRequest.Base64Data)
               .WithBucketName(bucketName)
               .WithKey(fileName);

        // Dispose the response so the underlying HTTP resources are released.
        using (S3Response response = client.PutObject(request))
        {
        }
    }
    catch (AmazonS3Exception amazonS3Exception)
    {
        // Handle exception...
    }
}
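For completeness, here is a minimal sketch of a variant we are also experimenting with: reusing a single client instead of creating one per call, and passing the payload as a stream (WithInputStream) rather than a Base64 string body, since encoding the large string per request may account for some of the CPU cost. This is only a sketch, assuming the same SDK v1 fluent API as above; the class name, method, region, and the idea that dataRequest can expose raw bytes are our assumptions, not tested code.

```csharp
using System.IO;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

public static class S3Uploader
{
    // Reuse one client for all requests; creating a new client per call
    // pays the connection/handshake setup each time.
    private static readonly AmazonS3 Client =
        AWSClientFactory.CreateAmazonS3Client(RegionEndpoint.USEast1); // region is a placeholder

    public static void Upload(string bucketName, string id, byte[] payload)
    {
        using (var stream = new MemoryStream(payload))
        {
            PutObjectRequest request = new PutObjectRequest();
            request.WithBucketName(bucketName)
                   .WithKey("111_" + id + ".txt")
                   .WithInputStream(stream); // stream body instead of a string body

            using (S3Response response = Client.PutObject(request))
            {
                // Disposing releases the underlying HTTP resources.
            }
        }
    }
}
```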
Thanks!