Storage destinations

How destination storage works: the default destination, connecting your own bucket, supported providers, credential fields, and how to access the finished file.

Every file Converterer produces (a converted image, a transcoded video, a rendered PDF) has to land somewhere. That somewhere is a destination: a storage location tied to your account that the API uploads finished files to. When a conversion task or job reaches the delivered status, the file is in the destination's storage at the file_name you set (or {id}.{output_format} / {id}.pdf by default). You retrieve it from your own bucket.

Destinations are configured in the dashboard, not through the public API. A destination can have one or more API keys, and each API key belongs to exactly one destination, so the key you authenticate with determines where that request’s output goes. Webhook subscriptions attach to a destination the same way.

You might create multiple destinations to send output to more than one bucket, or to keep credentials separate across environments such as staging and production.

You have two options: use the default destination (no setup, but files are temporary), or connect your own bucket on one of the supported cloud providers.

The default destination

Every account starts with a default destination already wired up. You do not need to bring your own bucket to start using the API. It’s backed by Converterer-managed Backblaze B2 storage, and each account gets its own isolated path within it.

When your API key points at the default destination, GET responses expose a default_path (the base URL your files live under) instead of the cloud_storage / bucket_name / region fields, which are returned as null.
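A client can branch on this to find the base URL files live under. A minimal sketch, assuming hypothetical URL values (the field names default_path and bucket come from the behavior described above):

```shell
# Pick the base URL your files live under, depending on destination type.
# DEFAULT_PATH stands in for the default_path field of a GET response;
# for a custom bucket it is null/empty and you use your own bucket's URL.
DEFAULT_PATH="https://storage.example.com/acct-123"        # hypothetical value
BUCKET_URL="https://my-bucket.s3.us-east-1.amazonaws.com"  # hypothetical custom bucket

if [ -n "$DEFAULT_PATH" ]; then
  BASE_URL="$DEFAULT_PATH"   # default destination
else
  BASE_URL="$BUCKET_URL"     # your own bucket
fi
echo "$BASE_URL"
```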

⚠️ Coming soon: the default destination will purge files after 7 days.

File purging is not live yet, but it is planned: files in the default destination will be deleted automatically 7 days after they're created. The default destination is meant for testing and short-lived workflows, not durable storage, so build on the assumption that these files are temporary: if you need to keep a converted file, download it from the default bucket (e.g. wget / curl the file URL into your own storage) rather than relying on it staying there.

# When a conversion is delivered, the file is in the default bucket at
# {default_path}/{task_id}.{output_format}. Pull it into your own archive
# before the 7-day purge fires.
DEFAULT_PATH="https://…"   # the default_path field from your destination's GET response
TASK_ID="9f1a8e7c-1b9b-4f0a-9d2c-1a2b3c4d5e6f"
wget -P ./my-archive/ "${DEFAULT_PATH}/${TASK_ID}.jpg"

For anything production-grade, connect your own bucket. Files there live as long as you keep them.

Connecting your own bucket

Converterer supports more than plain AWS S3. Alongside the S3-compatible providers it also speaks Azure Blob Storage and Google Cloud Storage natively.

| Provider | cloud_storage value | Storage type | Notes |
| --- | --- | --- | --- |
| AWS S3 | s3 | S3 | Region picked from the AWS region list |
| Backblaze B2 | b2 | S3-compatible | Region taken from the bucket endpoint s3.{region}.backblazeb2.com |
| DigitalOcean Spaces | digitalocean | S3-compatible | Endpoint {region}.digitaloceanspaces.com |
| Linode Object Storage | linode | S3-compatible | Endpoint {region}.linodeobjects.com |
| Vultr Object Storage | vultr | S3-compatible | Endpoint {region}.vultrobjects.com |
| Azure Blob Storage | azure | Azure | Uses a storage account + blob container; no region field |
| Google Cloud Storage | gcloud | GCS | Authenticated with a service-account JSON key file; no secret or region |

Credential fields per provider

Each provider needs a different set of fields. The S3-compatible family (s3, b2, digitalocean, linode, vultr) is largely consistent; Azure and GCS differ.

| Field | s3 | b2 | digitalocean | linode | vultr | azure | gcloud |
| --- | --- | --- | --- | --- | --- | --- | --- |
| key | Access key | Key ID | Key | Access key | Access key | Storage account name | Project ID |
| secret | Secret key | Application key | Secret | Secret key | Secret key | Account Key1 or Key2 | - |
| key_file | - | - | - | - | - | - | Service-account JSON |
| bucket_name | Bucket name | Bucket name | Bucket name | Object storage name | Bucket name | Blob container | Bucket name |
| region | required (select) | us-east-001-style | required (select) | e.g. us-southeast | required | - | - |
| acl_permission | - | public / private | public / private | - | - | - | - |

Validation rules applied when a destination is created or updated:

  • key and bucket_name are always required.
  • secret is required for every provider except gcloud (which uses key_file instead).
  • region is required for s3, digitalocean, linode, and vultr.
  • acl_permission is required for b2.
  • key_file is required for gcloud and must be a valid service-account JSON file.
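The rules above can be mirrored as a quick client-side pre-check before saving. A sketch (the dashboard performs the authoritative validation; the field lists are taken straight from the rules above):

```shell
# Print the fields required for a given cloud_storage value, per the
# validation rules above. Returns non-zero for an unknown provider.
required_fields() {
  case "$1" in
    s3|digitalocean|linode|vultr) echo "key secret bucket_name region" ;;
    b2)     echo "key secret bucket_name acl_permission" ;;
    azure)  echo "key secret bucket_name" ;;
    gcloud) echo "key key_file bucket_name" ;;
    *)      return 1 ;;
  esac
}

required_fields gcloud   # -> key key_file bucket_name
```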

Verification on save

When you add or change a destination, Converterer immediately verifies it by writing a small converterer.txt test file into the bucket. If the credentials, bucket name, region, or permissions are wrong, the save is rejected with a field-specific error. For example, a bad access key fails on key, a wrong region fails on region, and a missing bucket fails on bucket_name. This means a destination that saved successfully is known-good at that moment.

Note that this converterer.txt test file is not removed afterwards, so you may see it in your bucket. It’s harmless and safe to delete.

Accessing the finished file

The API returns IDs only, not URLs. When a task or job reaches delivered, the file is at a predictable path inside your destination’s bucket: the file_name you set on submission, or {id}.{output_format} (file conversion) / {id}.pdf (website capture) by default. You construct the URL or fetch the file yourself from your own storage.
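Constructing that path in a script looks like this, assuming a hypothetical bucket base URL and task ID (substitute your own file_name if you set one at submission):

```shell
# Build the finished-file URL from the documented default naming.
BASE_URL="https://my-bucket.s3.us-east-1.amazonaws.com"   # hypothetical bucket URL
TASK_ID="9f1a8e7c-1b9b-4f0a-9d2c-1a2b3c4d5e6f"
OUTPUT_FORMAT="jpg"

FILE_URL="${BASE_URL}/${TASK_ID}.${OUTPUT_FORMAT}"        # file conversion
PDF_URL="${BASE_URL}/${TASK_ID}.pdf"                      # website capture
echo "$FILE_URL"
```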

How you do that depends on the bucket’s visibility:

  • Public buckets (the default destination, and any custom bucket set to public / public-read): the file is reachable via the bucket’s public URL with no extra auth, in a browser or via curl.
  • Private buckets: use your cloud provider’s SDK or signed-URL mechanism to fetch the file. The API doesn’t generate a signed URL for you.

For b2 and digitalocean you choose public vs. private per destination via acl_permission. For s3 and the other S3-compatible providers, object visibility follows whatever the bucket itself is configured for on the provider side.
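For a private S3 or S3-compatible bucket, one option is the AWS CLI's `aws s3 presign` command, which prints a time-limited HTTPS URL for an object. A sketch with hypothetical bucket and key names (the command is built as a string here so its shape is visible; run it directly in an environment with AWS credentials configured):

```shell
# aws s3 presign generates a signed URL without touching the object itself.
BUCKET_NAME="my-app-output"                               # hypothetical bucket
OBJECT_KEY="9f1a8e7c-1b9b-4f0a-9d2c-1a2b3c4d5e6f.jpg"     # {id}.{output_format}
PRESIGN_CMD="aws s3 presign s3://${BUCKET_NAME}/${OBJECT_KEY} --expires-in 3600"
echo "$PRESIGN_CMD"
```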

Choosing a destination

| | Default destination | Your own bucket |
| --- | --- | --- |
| Setup | None, works out of the box | Connect credentials in the dashboard |
| Retention | Temporary, 7-day purge planned | As long as you keep them |
| Best for | Testing, prototypes, short-lived jobs | Production, anything you need to keep |
| File access | Public via the destination's default_path | Public or private, depending on your bucket's settings |

A common pattern: develop against the default destination, then switch your API key to a custom bucket (or issue a new key bound to one) before going to production.

Rotating API keys

You can issue and revoke API keys against any destination at any time from the dashboard. Revoked keys stop working immediately, with no grace period, so coordinate rotation with your deploys. Issuing a new key for the same destination is the simplest way to roll credentials without changing where your output lands.