Previous incidents

August 2024
Aug 28, 2024
1 incident

Predictions not running on A40s

Downtime

Resolved Aug 28 at 06:11am UTC

A40 workloads are running again. We're continuing to monitor and investigate the underlying cause.

Aug 21, 2024
1 incident

Streaming service degraded for A100s

Degraded

Resolved Aug 21 at 11:04am UTC

We believe these problems have now been resolved. Please contact us if you are still seeing issues with streaming from Europe.

Aug 09, 2024
1 incident

A40s degraded

Degraded

Resolved Aug 09 at 03:58pm UTC

A40 behavior has been stable for some time now. All systems are green.

July 2024
Jul 25, 2024
1 incident

Llama3-70b-chat Delays

Degraded

Resolved Jul 25 at 11:44pm UTC

This has been resolved and predictions should be handled normally.

Jul 17, 2024
1 incident

Predictions on trained versions not starting

Degraded

Resolved Jul 17 at 04:36pm UTC

We've fixed the issue and predictions on trained versions are running again.

Jul 16, 2024
1 incident

Intermittent issues affecting some hardware types

Degraded

Resolved Jul 16 at 08:16pm UTC

Things are running normally as of about 15 minutes ago.

Jul 09, 2024
1 incident

API degradation

Degraded

Resolved Jul 09 at 12:15pm UTC

Service has been restored. Thanks for your patience!

Jul 03, 2024
1 incident

Llama 3 70b instruct model not processing predictions

Degraded

Resolved Jul 03 at 11:11am UTC

The model is processing predictions properly again, and the queue is empty.

June 2024
Jun 21, 2024
1 incident

Some models unavailable

Degraded

Resolved Jun 21 at 03:40pm UTC

Service has been restored as of a few minutes ago.

Jun 20, 2024
1 incident

Errors publishing model versions

Degraded

Resolved Jun 20 at 10:41pm UTC

Model version publishing is now working as expected.

Jun 04, 2024
1 incident

Errors with inference

Degraded

Resolved Jun 04 at 12:36am UTC

The inference issues were limited to a few LLM models. The problematic code has been rolled back, and all inference should now be operating normally.
