A question: if I run the conversion cloud product self-hosted as a Docker container in my AWS account, I'd prefer to grant the container in question access to the S3 buckets via ECS metadata rather than via a separate key. Is this supported? i.e. if I pass S3_STORAGE_BUCKET but do not pass S3_STORAGE_ACCESS_KEY or S3_STORAGE_SECRET_KEY, will the code correctly rely on the AWS credentials chain, using the metadata service to obtain its credentials?
Hi @cpopetz ,
According to the documentation, yes, the approach you describe is supported by the AWS SDK for .NET, which uses the AWS credentials chain to obtain credentials. If S3_STORAGE_ACCESS_KEY and S3_STORAGE_SECRET_KEY are not provided, the SDK falls back to other sources in the credentials chain, such as the ECS metadata service, IAM roles, or the shared credentials file.
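For reference, the same default-chain behaviour can be exercised directly with the AWS SDKs. The container itself is .NET, so this is only a minimal sketch using the AWS SDK for Java v2 (the region below is a placeholder): when no access key or secret is supplied, the client resolves credentials from the chain, including the ECS container credentials endpoint.

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

public class CredentialChainDemo {
    public static void main(String[] args) {
        // No access key or secret is passed explicitly: the SDK walks its
        // default chain (env vars, system properties, profile file, and
        // finally the ECS container credentials / EC2 instance metadata).
        try (S3Client s3 = S3Client.builder()
                .region(Region.US_EAST_1) // placeholder region
                .build()) {
            // Any call is signed with whatever credentials the chain resolved,
            // e.g. the ECS task role when running inside the container.
            s3.listBuckets().buckets()
                    .forEach(b -> System.out.println(b.name()));
        }
    }
}
```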
However, the Conversion Cloud product self-hosted as a Docker container does not currently support leaving any of the credentials unset; it will throw an exception.
Please confirm that you are willing to wait a little while and we will make a hotfix that adjusts the code to handle the case where apiKey or apiSecret is null or empty. I think we can deliver the fixed image next week.
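The product code is .NET, so the sketch below is only an illustration (in Java, using AWS SDK v2 types) of the kind of fallback the hotfix needs: use static credentials when both values are present, otherwise defer to the default chain, which includes the ECS metadata service.

```java
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;

public final class S3CredentialsResolver {

    // If an access key / secret pair is configured, use it as-is; otherwise
    // fall back to the default chain, which includes the ECS metadata service.
    static AwsCredentialsProvider resolve(String accessKey, String secretKey) {
        boolean hasKeys = accessKey != null && !accessKey.isBlank()
                && secretKey != null && !secretKey.isBlank();
        if (!hasKeys) {
            return DefaultCredentialsProvider.create();
        }
        return StaticCredentialsProvider.create(
                AwsBasicCredentials.create(accessKey, secretKey));
    }
}
```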
@cpopetz
We have opened the following new ticket(s) in our internal issue tracking system and will deliver their fixes according to the terms mentioned in Free Support Policies.
Issue ID(s): CONVERSIONCLOUD-603
Hi Sergei,
That would be great, the timeline is fine because we can’t go live with this anyway until CONVERSIONCLOUD-599 is fixed (the http_only issue).
Thank you for the proposed fix!
-Clint
Hi, yes, CONVERSIONCLOUD-599 is still not fixed.
We have updated the image groupdocs/conversion-cloud:latest and made S3_STORAGE_ACCESS_KEY and S3_STORAGE_SECRET_KEY optional. However, we have not tested how it works with the AWS credentials chain, so you can try this now. Please follow these recommendations:
- Your ECS task definition specifies a task role that grants access to the desired S3 bucket.
- The container is running in an environment where the ECS metadata service is accessible.
- Only provide S3_STORAGE_BUCKET as an environment variable, and leave S3_STORAGE_ACCESS_KEY and S3_STORAGE_SECRET_KEY unset.
This setup will allow the conversion-cloud instance to automatically use the ECS task role credentials through the AWS credentials chain.
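If you want to confirm the task role is actually being picked up before testing the container, you can run a quick identity check from the same ECS task environment. This is just a sketch with the AWS SDK for Java v2, not part of the product; the region value is a placeholder.

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sts.StsClient;
import software.amazon.awssdk.services.sts.model.GetCallerIdentityResponse;

public class TaskRoleCheck {
    public static void main(String[] args) {
        // Uses the same default credential chain the container will rely on.
        try (StsClient sts = StsClient.builder()
                .region(Region.US_EAST_1) // placeholder region
                .build()) {
            GetCallerIdentityResponse id = sts.getCallerIdentity();
            // Inside ECS with a task role attached, this should print the
            // task role ARN rather than an IAM user ARN.
            System.out.println("Resolved identity: " + id.arn());
        }
    }
}
```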
We also have another tag, alpine, which is not updated; please let us know if you are using this tag instead of latest.
I look forward to your feedback about using the AWS credentials chain.
Thanks Sergei, I will test the above change this week. Do you have an estimated fix date for CONVERSIONCLOUD-599, so that we can schedule the release of our feature that depends on GroupDocs?
-Clint
Hi, regarding CONVERSIONCLOUD-599: we found out that the 'useHttp' flag is not set in the self-hosted version for accessing AWS S3, so we use all default settings for AmazonS3Client except those passed in the environment variables (bucket, region, and API keys), so it should work over HTTPS too.
We will also try to update the AWSSDK; this may help.
OK, now that the license question has been resolved, I have a GroupDocs container up and running in my AWS account under ECS, and I am able to hit it via Swagger and authenticate with my client id and secret (as configured in the dashboard), and it claims my license is OK. However, attempts to convert or check the existence of a file in S3 storage do not succeed, and they don't tell me why. I am passing the following variables (cut off in the screenshot to elide the full keys) and also passing client_id and client_secret from my calling code in my app.
Screenshot 2025-02-04 at 5.53.59 PM.png (20.3 KB)
Both converting via my Java REST client through the SDK and verifying the existence of files via Swagger after authenticating fail.
Is there a flag I can pass to the container to get more debugging info from it?
Hi, is there any error returned? Can you check the container's default logs?
If you mean STDOUT, there's nothing except the startup messages; see below. If there is another log file, it isn't described in the self-hosting docs.
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
No XML encryptor configured. Key {ebe77fd4-07c8-4f84-9a3c-bcddbd007fcc} may be persisted to storage in unencrypted form.
warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
info: GroupDocs.Conversion.Cloud.Web.Startup[0]
The license has been set.
To clarify: should the client_id/client_secret be specified both in the Docker environment variables and also in the REST API client Configuration? I am doing both, but that seems odd, because it would seem like either the container would accept that as configuration and send it to your API server, or the client would send it and the container would pass it through, but not both.
Also, I am specifying S3_STORAGE_BUCKET as suggested, but when using the API I am also specifying the storageName, matching the name of the configured storage in the dashboard. Is that correct?
I figured it out: I needed to pass the region in as an env var, which I had thought it would obtain from the default ECS metadata. So I now have conversions working correctly with the API in my container, thank you for your help!
That's great!
BR, Sergei