Org-wide repo credentials
I'm putting the same user credentials into every image source that pulls repos from our org's GitHub account. Could Cycle add shared credentials that can be referenced when creating new image sources?
Hi Cycle team,
I noticed two new endpoints in the environment monitoring config for metrics and events. For custom events, it looks like the only option available right now is providing a destination URL.
We use Atlassian Opsgenie, which uses a global destination URL and requires an API key to be included in the request header for authorization.
Example:
curl -X POST https://api.opsgenie.com/v2/alerts \
  -H "Authorization: GenieKey API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "CPU usage critical",
    "tags": ["cpu", "production"]
  }'
It would be helpful to have an option to include an API key (or custom headers) as part of the configuration, so it can be sent along with the request.
This enhancement would enable direct integration with services like Opsgenie and other systems that require header-based authentication, improving flexibility and reducing the need for intermediary solutions.
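One possible shape for such a config, sketched as JSON (all field names here are hypothetical, not an existing Cycle schema):

```json
{
  "custom_events": {
    "destination": "https://api.opsgenie.com/v2/alerts",
    "headers": {
      "Authorization": "GenieKey API_KEY",
      "Content-Type": "application/json"
    }
  }
}
```

A free-form headers map like this would cover Opsgenie's GenieKey scheme as well as Bearer tokens and other header-based auth.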
Hi Cycle team 👋
We’d love to see support for encryption at rest — either at the server disk level or at the individual volume level.
For teams deploying workloads in third-party virtualized environments, this is becoming a pretty standard requirement.
⸻
Why We’re Asking
When running in a virtual provider environment, we don’t physically control the underlying hardware. Even though TLS handles encryption in transit, we still need guarantees around data stored on disk.
For many companies (especially those dealing with customer or regulated data), encryption at rest isn’t optional — it’s table stakes for production.
This impacts things like:
• Enterprise security reviews
• SOC 2 / ISO 27001 compliance
• GDPR / HIPAA workloads
• Internal security policies
• Risk mitigation around snapshots / host access
Without it, some workloads just can’t move onto the platform.
⸻
What Would Help
Any of the following would be great:
1️⃣ Host-Level Disk Encryption
• All server disks encrypted by default
• Transparent to containers
• Configurable per environment if needed

2️⃣ Volume-Level Encryption
• Encryption on specific persistent volumes
• Visible status in the UI and API
• Clear documentation on how it’s implemented

3️⃣ Key Management Options (Stretch Goal)
• Bring Your Own Key (BYOK) support
• Key rotation visibility
While the current default RAID1 configuration provides a reliable foundation for data integrity, it would be beneficial to have more flexibility in how Cycle handles storage.
In scenarios where we are utilizing distributed storage solutions like Garage, redundancy is already managed at the application level. In these cases, the ability to prioritize maximum storage capacity over local hardware redundancy would be highly advantageous. Furthermore, providing users with granular control over which specific storage devices are assigned to a container or VM would significantly improve resource optimization and environment customization.
With the new dashboard for environments, it would be super handy to add a dedicated circle for core services (like LBs, VPN, discovery, etc) and also something to signal that a core service has updates available on the cards as well as the dropdown list.
This would make it super easy to spot any updates/issues that are not related to the hundreds-of-containers circle. Also, maybe add a smaller preview of the new Uptime bar to the dropdown list.
Not a big deal, but one thing I often find myself annoyed by: when I restart an instance, I have to open it and watch the instances until they pass the health check and are ready. For services with defined checks, it would be nice if the Instances column, which currently shows a count of instances and a ring indicating how many are running and how many are not, could also somehow indicate how many are ready vs. just running. Maybe with color-coding? Right now the ring just shows green for running without regard for readiness. Maybe add an intermediate color like blue for "running but hasn't passed the health check yet" and only go green once the instance is actually ready?
Hey all,
This one's a bit in the weeds but here's the context:
What I would like:
The ability to tag containers, e.g. with migration (the canonical example of this sort of workload is a database migration, hence the tag), where in a pipeline step I can just say "now stop all containers tagged migration". That'd very neatly solve my problem. In my specific case I could model all of these as function containers and use a "now stop all function containers" type grouping, but I'd imagine that'd be much less broadly useful to others.
One feature I'd really love is the ability to execute a restart as a "rolling" restart. Right now, manual restarts (hitting the button, applying a config change, etc.) stop all instances at once, producing app downtime. Without a defined health check policy, there's probably no way around that. But when a health check policy IS defined, I would love to be able to set the default restart method to a rolling restart, where each subsequent instance restart does not begin until the previous instance reaches healthy status. That functionality would be incredibly valuable in a wide variety of situations...
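The requested behavior can be sketched in a few lines; this is illustrative pseudologic, not Cycle's implementation, and the restart/health-check callables are hypothetical stand-ins:

```python
import time

def rolling_restart(instances, restart, is_healthy, timeout=120, poll=2.0):
    """Restart instances one at a time, waiting for each to report
    healthy before moving on, so capacity drops by at most one instance."""
    for inst in instances:
        restart(inst)
        deadline = time.monotonic() + timeout
        while not is_healthy(inst):
            if time.monotonic() > deadline:
                raise TimeoutError(f"{inst} did not become healthy in {timeout}s")
            time.sleep(poll)
```

The key design point is the gate between iterations: the next restart is never issued until the previous instance has passed its health check (or the timeout aborts the rollout).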
Hi everyone,
Over the holidays, a new MongoDB vulnerability was published that involves the ability to dump uninitialized server memory over the network without authentication. The attack is rather easy to exploit and simply requires an out-of-date version of Mongo plus zlib compression.
We wanted to bring this to our community's attention, as many of you are running Mongo on Cycle. And, as many of you know, we use Mongo internally to power the platform. To be clear, Cycle itself was not affected by this vulnerability. Nevertheless, we've upgraded to a patched version to be on the safe side.
If you're at risk, especially if you're running Mongo publicly on the internet, then you should also upgrade right away to one of these patched versions:
If you're running Mongo on Cycle with public internet DISABLED, then you're most likely fine, but we still urge you to upgrade just to be safe.
Read more about the CVE here, and feel free to reach out to our team if you have any questions/concerns we can help with.
Hey all, today we are unable to log into the Cycle Portal because of an error on the login page. I've attached a screenshot showing 401 HTTP errors and a (possibly resulting) JS error.
Hey team, I'd love to see readiness checks added to stacks! While the LBs do a good job of assessing packet latency, they truly can't tell if a container is in trouble and "just needs a moment to process/recover". A readiness check is a way for an instance to tell the deployment manager "don't reboot me, I need a second, stop talking to me". The readiness check is separate from the health check (which is really a liveness check): it purely indicates whether the instance can serve traffic at the moment.
We all need a moment to compose ourselves sometimes, so do our instances.. Give them a fighting chance!
For LB containers/instances, please add in the source IP address (seen as CF-CONNECTING-IP) so that we can source the original IP of inbound connections in LB logs. The current logs limit us to seeing a proxy IP address (which is always CloudFlare on certain IPs) and when watching LB logs, it would be nice to see both the proxy IP address as well as the source IP.
See https://developers.cloudflare.com/fundamentals/reference/http-headers/ for more information on CloudFlare headers.
Please add a /health or /status endpoint to the Cycle.io API that returns the operational status of the service. This would enable proper health checking and monitoring for applications that integrate with Cycle.io.
Proposed endpoint: GET https://api.<customer_id>.cycle.io/health
Expected response:
{ "status": "ok", "timestamp": "2025-10-17T17:00:00Z" }
Use case: This endpoint would allow our services to implement readiness probes that verify Cycle.io API availability before accepting traffic, improving reliability and enabling circuit breaker patterns for graceful degradation when the API is unavailable.
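As an illustration of how a client-side readiness probe could consume such an endpoint (the endpoint itself is only proposed here; this sketches the consumer, and the response shape is the one suggested above):

```python
import json

def is_api_healthy(status_code: int, body: str) -> bool:
    """Interpret a hypothetical /health response: healthy only when the
    request returned 200 and the JSON payload reports status 'ok'."""
    if status_code != 200:
        return False
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return False
    return payload.get("status") == "ok"
```

A readiness probe would call GET /health, feed the result through a check like this, and hold traffic (or trip a circuit breaker) while it returns False.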
HTTP status codes:
Would be nice to have a feature to verify that RAID configurations were set up properly during deployment
A handy feature would be a BASIC AUTH option on a web endpoint/load balancer. On nginx you would do something like:
server {
    listen 80;
    server_name your_domain.com;

    location / {
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
Rather than having to deploy nginx into a Cycle env and proxy all traffic through it just to add basic auth, it would be nice to have a "not intended for production use" option on an environment's load balancer/firewall to do basic auth.
Two choices would be available:
and finally, a simple GUI to add basic auth users.
Please consider adding a compress option to the log drain form in the environment settings panel.
Reference documentation here.
From my initial observation, compressed request bodies are unusual in HTTP traffic, but not impossible. When sending a request with a compressed body, the client must trust that the server is able to decompress it. The server can decompress the request body based on the Content-Encoding header sent by the client, e.g. Content-Encoding: gzip.
The Cycle agent pushes logs in a highly compressible format (NDJSON). Client-side (in Cycle's case, agent-side) compression may reduce network traffic for logs by 10x or more.
Example curl for compressed request body:
# compress the json data
gzip -9 -c body.json > body.gz

# send the compressed data
curl -v --location 'http://my.endpoint.example.com' \
  --header 'Content-Type: text/plain' \
  --header 'Content-Encoding: gzip' \
  --data-binary @body.gz
If the destination server does not support request decompression, Apache httpd can handle it with the following directives:
LoadModule deflate_module modules/mod_deflate.so
SetInputFilter DEFLATE
ProxyPass "/" "http://upstream/"
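To illustrate how well repetitive NDJSON log batches compress (a standalone sketch with synthetic log lines, not Cycle agent code):

```python
import gzip
import json

# build a batch of repetitive NDJSON log lines, like a log drain payload
lines = [
    json.dumps({
        "time": f"2025-08-07T11:11:{i % 60:02d}Z",
        "source": "stdout",
        "message": "request handled",
        "container_id": "containerid",
    })
    for i in range(1000)
]
body = "\n".join(lines).encode()

# gzip the whole batch, as an agent could before POSTing to the drain
compressed = gzip.compress(body, compresslevel=9)

ratio = len(body) / len(compressed)
print(f"{len(body)} -> {len(compressed)} bytes (~{ratio:.0f}x smaller)")
```

Because consecutive log lines share most of their structure and keys, batch compression ratios well beyond 10x are typical for this kind of payload.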
Please add auth option for external log drain requests. That way we can protect our log ingest endpoint by allowing only authorized agents.
Reference documentation here.
Proposed solution:
Example:
If the auth field contains the value Basic YWRtaW46cGFzc3dvcmQ=, it results in the header Authorization: Basic YWRtaW46cGFzc3dvcmQ=.
This also allows for other types of auth, like Bearer and API-key tokens.
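For reference, the Basic value in the example above is just the base64 encoding of user:password; a minimal sketch of constructing it:

```python
import base64

def basic_auth_value(user: str, password: str) -> str:
    """Build the value for an 'Authorization: Basic ...' header
    per the HTTP Basic scheme: base64 of 'user:password'."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"
```

For example, basic_auth_value("admin", "password") yields the Basic YWRtaW46cGFzc3dvcmQ= value shown above, which the drain would place verbatim in the Authorization header.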
Please add environment_identifier to exported logs so we can have a name instead of hash for switching between environment log views in our Grafana log dashboard.
Reference documentation here.
Proposed fields:
The value is the same as the identifier field on the environment settings page.
Example NDJSON raw request body:
{
  "time": "2025-08-07T11:11:11.12345678Z",
  "source": "stdout",
  "message": "some log message",
  "instance_id": "instanceid",
  "environment_id": "environmentid",
  "environment_identifier": "my-environment",   <---- please add this
  "container_id": "containerid",
  "container_identifier": "my-container",
  "container_deployment": "my-deployment",
  "server_id": "serverid"
}
*The JSON in the example above was formatted for convenience. An NDJSON body actually contains one JSON object per line, each representing a log message.
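A sketch of how a consumer could group such an NDJSON body by the proposed environment_identifier field (the field is the proposal here, so the fallback to environment_id reflects what the drain sends today):

```python
import json
from collections import defaultdict

def group_by_environment(ndjson_body: str) -> dict:
    """Group NDJSON log lines by the proposed environment_identifier,
    falling back to the environment_id hash when the identifier is absent."""
    groups = defaultdict(list)
    for line in ndjson_body.splitlines():
        if not line.strip():
            continue  # tolerate blank lines between records
        record = json.loads(line)
        key = record.get("environment_identifier", record.get("environment_id"))
        groups[key].append(record["message"])
    return dict(groups)
```

With the identifier present, dashboards (e.g. a Grafana log view) could key on the human-readable name instead of the hash.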
Hey All! I love the automatic lockdown on SFTP, as it seems like bots are crazier than ever these days; however, I'm having trouble seeing when my server is in lockdown and when it's out without reconciling against the activity event log. Would it be possible to change the portal so it's easy to tell the state of SFTP (locked down vs. not locked down)?
Hey Everyone!
Many of you have reported hearing that different providers are being affected by outages.
So far we've heard reports of:
These have been corroborated by our team via Downdetector and other avenues, but aside from things like Google Meet not loading, we are not hearing of major interruptions to compute nodes running Cycle.
If you are having an issue with your compute, definitely let us know as we want to share that information within our ecosystem as much as possible and help each other.
If you go through this week and haven't even had to think about the word outage, consider posting something on LinkedIn about it and tag our official page.