S3 Object Storage in Estonia — StorageVault

    S3-compatible storage for developers

    Works with every S3 tool: aws-cli, boto3, rclone, Terraform. Part of the Pilvio cloud — connect it to your servers over the private network, with free traffic between VM and storage.

    • S3 API compatible
    • Data in Estonia
    • 15 TB free egress
    • Pay-as-you-go

    S3 storage that speaks your tools' language

    StorageVault is Pilvio's S3-compatible object storage. It uses the same API as Amazon S3 — meaning every tool and library that works with AWS S3 also works with StorageVault. You don't need to change your code. Swap the endpoint and credentials — everything else stays the same. Unlike standalone storage services, StorageVault is part of the Pilvio cloud. Your VMs and storage are in the same data center, connected through a private network. Data transfer between server and storage is fast (<1 ms latency) and private-network traffic is free.

    S3 API compatible

    aws-cli, boto3, s3cmd, rclone, Cyberduck, Terraform S3 backend — it all works. Standard S3 API, not a custom dialect.

    Automatic scaling

    Buckets have no predefined size limit. Add data and the bucket grows automatically. No upfront provisioning.

    Versioning

    Files aren't overwritten. Use versioning to keep file history and restore earlier versions.
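    A restore under versioning boils down to: list the versions of a key, pick the newest non-current one, and copy it back on top. A sketch with a hypothetical helper (`previous_version_id` is not a library function); the live `copy_object` call is shown as a comment since it needs real credentials:

```python
def previous_version_id(resp, key):
    """From a list_object_versions response dict, return the VersionId of
    the newest non-current version of `key`, or None if there is none."""
    older = [v for v in resp.get('Versions', [])
             if v['Key'] == key and not v['IsLatest']]
    if not older:
        return None
    return max(older, key=lambda v: v['LastModified'])['VersionId']


# Restoring then looks like (s3 = a boto3 client pointed at StorageVault):
# vid = previous_version_id(s3.list_object_versions(Bucket='b', Prefix=key), key)
# s3.copy_object(Bucket='b', Key=key,
#                CopySource={'Bucket': 'b', 'Key': key, 'VersionId': vid})

# Demo with a synthetic response (LastModified is a datetime in real responses):
resp = {'Versions': [
    {'Key': 'config.json', 'VersionId': 'v3', 'IsLatest': True,  'LastModified': 3},
    {'Key': 'config.json', 'VersionId': 'v2', 'IsLatest': False, 'LastModified': 2},
    {'Key': 'config.json', 'VersionId': 'v1', 'IsLatest': False, 'LastModified': 1},
]}
print(previous_version_id(resp, 'config.json'))  # v2
```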

    Unique URLs

    Each bucket gets a unique URL. Public files are reachable directly from the browser — perfect for serving static files.

    Built-in redundancy

    Data is protected against hardware failure. A separate backup inside the same cluster isn't needed.

    One platform

    Manage VMs and storage from the same dashboard and API. One invoice, one account, one support team.

    7 ways developers use StorageVault

    1. Terraform state backend

    Problem: Your Terraform state file is the source of truth for your infrastructure. If it lives locally, your team can't collaborate. If it lives in AWS S3, your Estonian infrastructure's state file sits in US jurisdiction.

    Solution: Use StorageVault as a Terraform remote state backend. The S3-compatible API means setup is similar to AWS — only the endpoint and a few compatibility flags differ.

    terraform {
      backend "s3" {
        bucket = "my-tfstate"
        key    = "production/terraform.tfstate"
    
        endpoint = "https://s3.pilw.io"
        region   = "eu-west-1"  # required by Terraform, any string works
    
        # Required for non-AWS S3-compatible providers:
        skip_credentials_validation = true
        skip_metadata_api_check     = true
        skip_requesting_account_id  = true
        skip_s3_checksum            = true
        force_path_style            = true
        # Terraform >= 1.6 deprecates endpoint / force_path_style in favor of
        # endpoints = { s3 = "..." } and use_path_style = true
      }
    }
    
    # Terraform >= 1.11.2 checksum workaround:
    # export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
    # export AWS_RESPONSE_CHECKSUM_VALIDATION=when_required

    Outcome: Terraform state lives in Estonia and the whole team works against one shared state file instead of local copies.

    2. Static-site caching layer (CDN backend)

    Problem: Your site reads data from an external API (Google Drive, Google Calendar, a CMS). Each page load triggers an API call — the site is slow and hits rate limits.

    Solution: A cron job or CI/CD pipeline pulls data from the API, writes it as JSON/HTML into StorageVault, and your site reads from there. StorageVault acts as a fast caching layer.

    # GitHub Actions schedule → Google API → JSON → StorageVault → site
    curl -s "https://api.example.com/data" \
      | aws s3 cp - s3://site-cache/data.json \
          --endpoint-url https://s3.pilw.io

    Outcome: The site loads fast (data is already prepared), no API call is made on every page load, and rate limits are a non-issue.

    3. CI/CD artifact store

    Problem: CI/CD pipelines produce build artifacts (Docker images, compiled binaries, test reports). GitHub Actions and GitLab CI artifact storage is limited and expensive.

    Solution: Pipelines write artifacts to StorageVault. Later pipeline steps or deploy scripts read them back.

    # GitHub Actions: upload build artifact
    aws s3 cp ./dist/app.tar.gz s3://builds/app-${GITHUB_SHA}.tar.gz \
      --endpoint-url https://s3.pilw.io
    
    # Deploy: pull artifact on production VM
    ssh production "aws s3 cp s3://builds/app-${GITHUB_SHA}.tar.gz /tmp/ \
      --endpoint-url https://s3.pilw.io && tar xzf /tmp/app-*.tar.gz -C /opt/app"

    4. Backup and disaster recovery

    Problem: Server backups need to live in a separate location. Backing up to the same server doesn't protect against hardware failure.

    Solution: Automated backup to StorageVault. Because StorageVault shares the Pilvio private network, transfers are fast and free — but the data is physically separated from the server.

    # Nightly database dump to StorageVault
    pg_dump mydb | gzip | aws s3 cp - \
      s3://backups/db-$(date +%Y%m%d).sql.gz \
      --endpoint-url https://s3.pilw.io
    
    # Sync web files
    rclone sync /var/www/html storagevault:website-backup/
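    The `storagevault:` remote used above has to be defined in rclone's config first. A minimal entry could look like the following (the keys are placeholders, and the exact `provider` value is an assumption; any generic S3 provider setting with this endpoint should behave the same):

```ini
[storagevault]
type = s3
provider = Other
access_key_id = YOUR_KEY
secret_access_key = YOUR_SECRET
endpoint = https://s3.pilw.io
```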

    Outcome: Off-server backups on a schedule, no separate backup service required.
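    Nightly dumps pile up, so most setups add a retention step. A small sketch (the `db-YYYYMMDD.sql.gz` naming matches the dump command above; the function name is hypothetical) that picks which keys are old enough to hand to `s3.delete_objects`:

```python
import re
from datetime import date, timedelta

def expired_backup_keys(keys, today, keep_days=30):
    """Return keys of db-YYYYMMDD.sql.gz dumps older than keep_days."""
    cutoff = today - timedelta(days=keep_days)
    expired = []
    for key in keys:
        m = re.search(r'db-(\d{4})(\d{2})(\d{2})\.sql\.gz$', key)
        if m and date(*map(int, m.groups())) < cutoff:
            expired.append(key)
    return expired

keys = ['db-20240101.sql.gz', 'db-20240601.sql.gz', 'notes.txt']
print(expired_backup_keys(keys, date(2024, 6, 15)))  # ['db-20240101.sql.gz']
```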

    5. Media and file serving

    Problem: Your app needs to serve a large number of images, videos, PDFs or other files. Storing them on a VM disk is expensive and doesn't scale.

    Solution: Keep media in StorageVault. Public files are reachable via their URL. Private files are served through signed URLs.

    import boto3
    
    s3 = boto3.client('s3',
        endpoint_url='https://s3.pilw.io',
        aws_access_key_id='YOUR_KEY',
        aws_secret_access_key='YOUR_SECRET'
    )
    
    s3.upload_file('photo.jpg', 'media-bucket', 'images/photo.jpg')
    
    # Signed URL (valid 1 h)
    url = s3.generate_presigned_url('get_object',
        Params={'Bucket': 'media-bucket', 'Key': 'images/photo.jpg'},
        ExpiresIn=3600
    )

    6. Static site hosting (JAMstack backend)

    Problem: JAMstack / static sites (Hugo, Next.js export, Astro) need a place to serve HTML/CSS/JS from.

    Solution: Build the site locally or in CI/CD, upload files to StorageVault, front it with Caddy or Nginx on a Pilvio VM. Or use the bucket's public URLs directly.

    # Hugo build + deploy
    hugo --minify
    aws s3 sync ./public/ s3://my-website/ \
      --endpoint-url https://s3.pilw.io \
      --delete

    7. GitHub → S3 → website pipeline

    Problem: Content lives in GitHub (Markdown, JSON, data) but the website can't read from GitHub directly (rate limits, latency).

    Solution: A GitHub Actions webhook/schedule pulls updated files, processes them, and uploads to StorageVault. The website always reads from StorageVault — fast and reliable.

    # .github/workflows/sync-to-s3.yml
    name: Sync content to StorageVault
    on:
      push:
        branches: [main]
        paths: ['content/**']
    
    jobs:
      sync:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Sync to StorageVault
            env:
              AWS_ACCESS_KEY_ID: ${{ secrets.SV_ACCESS_KEY }}
              AWS_SECRET_ACCESS_KEY: ${{ secrets.SV_SECRET_KEY }}
            run: |
              aws s3 sync ./content/ s3://site-content/ \
                --endpoint-url https://s3.pilw.io \
                --delete

    Set up in 5 minutes

    Create a bucket, configure credentials, use any S3 tool.

    1. Create a bucket

    Create a new bucket via the API or the web UI at app.pilvio.com → StorageVault → Create Bucket.

    curl -X PUT "https://api.pilvio.com/v1/storage/bucket" \
      -H "apikey: $PILVIO_API_KEY" \
      -d "name=my-bucket" \
      -d "billing_account_id=YOUR_ID"

    2. Configure S3 credentials

    Get the access and secret key from the Pilvio dashboard (app.pilvio.com → StorageVault → Credentials).

    aws configure --profile pilvio
    # Access Key:    <from Pilvio dashboard>
    # Secret Key:    <from Pilvio dashboard>
    # Region:        us-east-1 (ignored, but required)
    # Output format: json

    3. Use it

    Every standard S3 operation works — upload, list, sync.

    # Upload
    aws s3 cp myfile.zip s3://my-bucket/ \
      --endpoint-url https://s3.pilw.io --profile pilvio
    
    # List
    aws s3 ls s3://my-bucket/ \
      --endpoint-url https://s3.pilw.io --profile pilvio
    
    # Sync a directory
    aws s3 sync ./data/ s3://my-bucket/data/ \
      --endpoint-url https://s3.pilw.io --profile pilvio

    Compatible tools

    • aws-cli — Amazon's official CLI
    • boto3 — Python SDK
    • s3cmd — Linux/Mac command-line client
    • rclone — universal cloud sync tool
    • Cyberduck — GUI client for Mac/Windows
    • Terraform — S3 backend for state
    • Veeam — enterprise backup
    • rclone + cron — automated sync / backup

    Simple pricing

    StorageVault uses pay-as-you-go billing. You pay only for the storage you actually use — buckets aren't pre-sized.

    Resource                          Price           Notes
    Storage                           €0.025/GB/mo    ~€25/TB/mo
    Egress traffic                    Free            Up to 15 TB/mo
    Ingress traffic                   Free            Unlimited
    API requests                      Free            Unlimited
    Pilvio private-network traffic    Free            VM ↔ StorageVault

    Examples

    • 10 GB (Terraform state + config): €0.25/mo
    • 100 GB (site media + backup): €2.50/mo
    • 500 GB (app data + archive): €12.50/mo
    • 1 TB (larger media/backup): €25.00/mo

    Traffic between Pilvio VMs and StorageVault (private network) is always free. You never pay for moving data between your server and storage. Public egress is free up to 15 TB/month.
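    The example figures above follow directly from the per-GB rate; a quick sanity check:

```python
STORAGE_EUR_PER_GB_MONTH = 0.025  # StorageVault storage rate

def monthly_storage_cost(gb):
    """Monthly storage cost in EUR. Egress, ingress and API requests are
    free within the published limits, so storage is the whole bill."""
    return round(gb * STORAGE_EUR_PER_GB_MONTH, 2)

print(monthly_storage_cost(100))   # 2.5
print(monthly_storage_cost(1000))  # 25.0
```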

    Your files stay in Estonia

    StorageVault data lives physically in Pilvio's Estonian data centers. It never leaves Estonian jurisdiction.

    GDPR with no extra work

    If your app stores user data (photos, documents, personal data), keeping that data in Estonia means it never leaves the EU, so no third-country transfer mechanisms or transfer impact assessments are needed.

    No CLOUD Act exposure

    Pilvio is 100% Estonian-owned. The US CLOUD Act, which lets the US government demand data from US companies, does not apply to Pilvio StorageVault. AWS S3, Azure Blob and Google Cloud Storage are all US companies.

    Low latency

    Users in Estonia and the Baltics see ~1–10 ms latency. AWS's nearest S3 region (eu-north-1, Stockholm) is ~20–30 ms.

    StorageVault compared

                                 Pilvio StorageVault       Storadera                AWS S3 (eu-north-1)    Hetzner Obj. Storage
    Price/TB/mo                  ~€25                      ~€6                      ~€23*                  ~€5.24
    Data in Estonia              ✅                        ✅ Tallinn + Amsterdam   ❌ Stockholm           ❌ Germany/Finland
    100% Estonian-owned          ✅                        ✅                       ❌ USA                 ❌ DE
    CLOUD Act exposure           ❌ none                   ❌ none                  ✅ USA                 ❌ none
    VM integration               ✅ private network, free  ❌ standalone service    ✅ (EC2)               ❌ separate services**
    Terraform provider
    Free traffic                 15 TB                     Free                     Metered (expensive)    1 TB
    API requests                 Free                      Free                     Metered                Free
    Estonian-language support    ✅ (~5 min)               ❌ EN only               ❌ EN only             ❌ EN only
    Best fit                     Dev workflow, VM + S3     Large backup / archive   Global ecosystem       Backup for Hetzner VMs

    • * AWS S3 Standard, excluding egress and request fees (which add on top).
    • ** Hetzner Object Storage and Cloud are separate services with no shared private network.

    If the only criterion is cheapest €/TB for large backups, Storadera and Hetzner are lower. If you need S3 storage integrated with your Estonian server infrastructure (one platform, one invoice, one private network), StorageVault is the better fit. AWS S3 only makes sense if you're already inside the AWS ecosystem.

    FAQ

    What is StorageVault?

    StorageVault is Pilvio's S3-compatible object storage. It uses the same API as Amazon S3 — every S3-compatible tool (aws-cli, boto3, rclone, Terraform S3 backend, Veeam, etc.) works directly. Data lives in Estonian data centers.

    Is StorageVault actually S3-compatible?

    Yes. StorageVault supports the standard S3 API operations: PUT, GET, DELETE, LIST, multipart upload, versioning, signed URLs. Tools like aws-cli, boto3, s3cmd, rclone and Cyberduck work unmodified — just swap the endpoint (https://s3.pilw.io).

    How much does StorageVault cost?

    Storage is €0.025/GB/month (€25/TB/month). Egress is free up to 15 TB/month. API requests are free. Private-network traffic between Pilvio VMs and StorageVault is free. No minimums, no long-term contracts.

    Can I use StorageVault as a Terraform state backend?

    Yes, but you need a few compatibility flags. Use the backend "s3" block with endpoint https://s3.pilw.io and set skip_credentials_validation, skip_metadata_api_check, skip_requesting_account_id, skip_s3_checksum and force_path_style to true. On Terraform >= 1.11.2 also export AWS_REQUEST_CHECKSUM_CALCULATION=when_required and AWS_RESPONSE_CHECKSUM_VALIDATION=when_required (workaround for HashiCorp issue #37432). Your infrastructure state file will live in Estonia, not on AWS servers.

    Can I use StorageVault together with a Pilvio VM?

    Yes. StorageVault and Pilvio VMs live in the same data center and share a private network. Data transfer is fast (<1 ms latency) and private-network traffic is free. Backups, filesystem sync and reading media from your app are all fast and incur no traffic fees.

    How does StorageVault compare to AWS S3?

    Functionally StorageVault is an AWS S3 analogue — same API, same tools. Key differences: data lives in Estonia (not Stockholm or Ireland), Pilvio is not subject to the US CLOUD Act, egress and API requests are free (AWS charges for both), and StorageVault is integrated with Pilvio VMs via a private network. AWS S3 has a broader ecosystem (Lambda, CloudFront, etc.), but StorageVault is simpler and more transparent.

    How does StorageVault compare to Storadera?

    Storadera specialises in cheap bulk S3 storage (~€6/TB/mo), aimed mainly at backup and archive. StorageVault (~€25/TB/mo) is integrated with the Pilvio cloud — you get VMs, storage, API and Terraform in one place. If all you need is cheap backup storage, Storadera is cheaper. If you need S3 as part of your development workflow (Terraform state, CI/CD artifacts, app media) alongside Estonian VMs, StorageVault is the better fit.

    Does StorageVault support versioning and signed URLs?

    Yes. Versioning keeps file history and protects against accidental overwrites. Signed (presigned) URLs let you share private files through a time-limited link without making the bucket public. Use boto3's generate_presigned_url() or the aws-cli presign command.

    Try StorageVault free

    Create a Pilvio account, get €30 in free credit. Create your first bucket and upload files in under 5 minutes. The credit covers both StorageVault and VMs — test the whole platform.

    info@pilw.io | +372 521 68 08 | Telliskivi 57, 10412 Tallinn