Storage destinations
How destination storage works: the default destination, connecting your own bucket, supported providers, credential fields, and how to access the finished file.
Every file Converterer produces (a converted image, a transcoded video, a rendered PDF) has to land somewhere. That somewhere is a destination: a storage location tied to your account that the API uploads finished files to. When a conversion task or job reaches delivered, the file is in the destination’s storage at the file_name you set (or {id}.{output_format} / {id}.pdf by default). You retrieve it from your own bucket.
Destinations are configured in the dashboard, not through the public API. A destination can have one or more API keys, and each API key belongs to exactly one destination, so the key you authenticate with determines where that request’s output goes. Webhook subscriptions attach to a destination the same way.
You might create multiple destinations to send output to more than one bucket, or to keep credentials separate across environments such as staging and production.
You have two options: use the default destination (no setup, but files are temporary), or connect your own bucket on one of the supported cloud providers.
The default destination
Every account starts with a default destination already wired up. You do not need to bring your own bucket to start using the API. It’s backed by Converterer-managed Backblaze B2 storage, and each account gets its own isolated path within it.
When your API key points at the default destination, GET responses expose a default_path (the base URL your files live under) instead of the cloud_storage / bucket_name / region fields, which are returned as null.
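For illustration, a GET on a destination backed by the default store might return something like the following. The field names are the ones described above; the overall response shape and the example URL are assumptions, not a documented contract.

```json
{
  "cloud_storage": null,
  "bucket_name": null,
  "region": null,
  "default_path": "https://storage.converterer.example/your-account-path"
}
```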
⚠️ Coming soon: the default destination will purge files after 7 days.
File purging is not live yet, but it is planned: files in the default destination will be deleted automatically 7 days after they’re created. The default destination is meant for testing and short-lived workflows, not durable storage. Build on the assumption that these files are temporary: if you need to keep a converted file, download it (e.g. wget or curl the file URL) and store it in your own storage rather than relying on it staying in the default bucket.
```bash
# When a conversion is delivered, the file is in the default bucket at
# {default_path}/{task_id}.{output_format}. Pull it into your own archive
# before the 7-day purge fires.
DEFAULT_PATH="<your destination's default_path>"  # from the destination GET response
TASK_ID="9f1a8e7c-1b9b-4f0a-9d2c-1a2b3c4d5e6f"
wget -P ./my-archive/ "${DEFAULT_PATH}/${TASK_ID}.jpg"
```
For anything production-grade, connect your own bucket. Files there live as long as you keep them.
Connecting your own bucket
Converterer supports more than plain AWS S3. Alongside the S3-compatible providers it also speaks Azure Blob Storage and Google Cloud Storage natively.
| Provider | cloud_storage value | Storage type | Notes |
|---|---|---|---|
| AWS S3 | s3 | S3 | Region picked from the AWS region list |
| Backblaze B2 | b2 | S3-compatible | Region taken from the bucket endpoint s3.{region}.backblazeb2.com |
| DigitalOcean Spaces | digitalocean | S3-compatible | Endpoint {region}.digitaloceanspaces.com |
| Linode Object Storage | linode | S3-compatible | Endpoint {region}.linodeobjects.com |
| Vultr Object Storage | vultr | S3-compatible | Endpoint {region}.vultrobjects.com |
| Azure Blob Storage | azure | Azure | Uses a storage account + blob container; no region field |
| Google Cloud Storage | gcloud | GCS | Authenticated with a service-account JSON key file; no secret or region |
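For the S3-compatible providers, the region sits inside the endpoint hostname, so you can read it straight off. A small illustration for a Backblaze B2 endpoint; the hostname here is a made-up example:

```bash
# s3.{region}.backblazeb2.com -> {region}
echo "s3.us-west-004.backblazeb2.com" |
  sed -E 's/^s3\.([^.]+)\.backblazeb2\.com$/\1/'
# prints: us-west-004
```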
Credential fields per provider
Each provider needs a different set of fields. The S3-compatible family (s3, b2, digitalocean, linode, vultr) is largely consistent; Azure and GCS differ.
| Field | s3 | b2 | digitalocean | linode | vultr | azure | gcloud |
|---|---|---|---|---|---|---|---|
| key | Access key | Key ID | Key | Access key | Access key | Storage account name | Project ID |
| secret | Secret key | Application key | Secret | Secret key | Secret key | Account Key1 or Key2 | - |
| key_file | - | - | - | - | - | - | Service-account JSON |
| bucket_name | Bucket name | Bucket name | Bucket name | Object storage name | Bucket name | Blob container | Bucket name |
| region | Required (select) | us-east-001-style | Required (select) | e.g. us-southeast | Required | - | - |
| acl_permission | - | public / private | public / private | - | - | - | - |
Validation rules applied when a destination is created or updated:
- key and bucket_name are always required.
- secret is required for every provider except gcloud (which uses key_file instead).
- region is required for s3, digitalocean, linode, and vultr.
- acl_permission is required for b2.
- key_file is required for gcloud and must be a valid service-account JSON file.
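As a concrete illustration of those rules, here is a complete field set for an s3 destination next to one for gcloud. You enter these values in the dashboard form rather than sending them to the API; the JSON shape and placeholder values are used here purely for readability.

```json
{
  "cloud_storage": "s3",
  "key": "AKIA-example-access-key",
  "secret": "example-secret-key",
  "bucket_name": "my-output-bucket",
  "region": "us-east-1"
}
```

```json
{
  "cloud_storage": "gcloud",
  "key": "my-gcp-project-id",
  "key_file": "(contents of the service-account JSON file)",
  "bucket_name": "my-output-bucket"
}
```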
Verification on save
When you add or change a destination, Converterer immediately verifies it by writing a small converterer.txt test file into the bucket. If the credentials, bucket name, region, or permissions are wrong, the save is rejected with a field-specific error. For example, a bad access key fails on key, a wrong region fails on region, and a missing bucket fails on bucket_name. This means a destination that saved successfully is known-good at that moment.
Note that this converterer.txt test file is not removed afterwards, so you may see it in your bucket. It’s harmless and safe to delete.
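If you'd rather tidy it up, your provider’s own tooling works. A sketch using the AWS CLI against an S3-compatible bucket; the bucket name and endpoint are placeholders for your own values:

```bash
# Plain AWS S3: delete the leftover verification file.
aws s3 rm s3://my-output-bucket/converterer.txt

# S3-compatible providers (B2, DigitalOcean, Linode, Vultr): same command,
# pointed at the provider's endpoint, e.g. Backblaze B2.
aws s3 rm s3://my-output-bucket/converterer.txt \
  --endpoint-url "https://s3.us-west-004.backblazeb2.com"
```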
Accessing the finished file
The API returns IDs only, not URLs. When a task or job reaches delivered, the file is at a predictable path inside your destination’s bucket: the file_name you set on submission, or {id}.{output_format} (file conversion) / {id}.pdf (website capture) by default. You construct the URL or fetch the file yourself from your own storage.
How you do that depends on the bucket’s visibility:
- Public buckets (the default destination, and any custom bucket set to public / public-read): the file is reachable via the bucket’s public URL with no extra auth, in a browser or via curl.
- Private buckets: use your cloud provider’s SDK or signed-URL mechanism to fetch the file. The API doesn’t generate a signed URL for you.
For b2 and digitalocean you choose public vs. private per destination via acl_permission. For s3 and the other S3-compatible providers, object visibility follows whatever the bucket itself is configured for on the provider side.
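A sketch of both cases for an AWS S3 destination, using curl and the AWS CLI; the task ID, bucket, and region are placeholders, and for the other S3-compatible providers you would add the matching --endpoint-url as shown above:

```bash
TASK_ID="9f1a8e7c-1b9b-4f0a-9d2c-1a2b3c4d5e6f"
BUCKET="my-output-bucket"
REGION="us-east-1"

# Public bucket: the standard S3 virtual-hosted URL is enough.
curl -fO "https://${BUCKET}.s3.${REGION}.amazonaws.com/${TASK_ID}.jpg"

# Private bucket: mint a short-lived signed URL yourself, then fetch it.
SIGNED_URL=$(aws s3 presign "s3://${BUCKET}/${TASK_ID}.jpg" --expires-in 300)
curl -fo "./${TASK_ID}.jpg" "${SIGNED_URL}"
```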
Choosing a destination
| | Default destination | Your own bucket |
|---|---|---|
| Setup | None, works out of the box | Connect credentials in the dashboard |
| Retention | Temporary, 7-day purge planned | As long as you keep them |
| Best for | Testing, prototypes, short-lived jobs | Production, anything you need to keep |
| File access | Public via the destination’s default_path | Public or private, depending on your bucket’s settings |
A common pattern: develop against the default destination, then switch your API key to a custom bucket (or issue a new key bound to one) before going to production.
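In practice the switch can be as small as swapping an environment variable per environment. A minimal sketch; the variable name, endpoint URL, auth header, and request fields are placeholders, not documented API details. The point is that the key alone decides where the output lands.

```bash
# Staging uses a key bound to the default destination; production uses a key
# bound to your own bucket. Nothing else in the request changes.
API_KEY="${CONVERTERER_API_KEY:?set per environment}"
curl -X POST "https://api.converterer.example/conversion-tasks" \
  -H "Authorization: Bearer ${API_KEY}" \
  -F "file=@input.png" -F "output_format=jpg"
```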
Rotating API keys
You can issue and revoke API keys against any destination at any time from the dashboard. Revoked keys stop working immediately, with no grace period, so coordinate rotation with your deploys. Issuing a new key for the same destination is the simplest way to roll credentials without changing where your output lands.