All Systems Operational
Desk: Operational (99.75% uptime over the past 90 days)
Take Blip Platform: Operational (99.92% uptime over the past 90 days)
CRM: Operational (100.0% uptime over the past 90 days)
Core: Operational (99.79% uptime over the past 90 days)
Analytics: Operational (100.0% uptime over the past 90 days)
Artificial Intelligence: Operational (99.93% uptime over the past 90 days)
Portal: Operational (99.84% uptime over the past 90 days)
Cloud Infrastructure: Operational (100.0% uptime over the past 90 days)
Channels: Operational (99.99% uptime over the past 90 days)
WhatsApp: Operational (99.98% uptime over the past 90 days)
Telegram: Operational (100.0% uptime over the past 90 days)
Messenger: Operational (100.0% uptime over the past 90 days)
BlipChat: Operational (100.0% uptime over the past 90 days)
Workplace Chat: Operational
BusinessChat: Operational
Skype: Operational
E-mail: Operational
Hosting Enterprise: Operational (100.0% uptime over the past 90 days)
Bot Builder: Operational (100.0% uptime over the past 90 days)
Bot Router: Operational (100.0% uptime over the past 90 days)
Hosting Business: Operational (100.0% uptime over the past 90 days)
Bot Builder: Operational (100.0% uptime over the past 90 days)
Bot Router: Operational (100.0% uptime over the past 90 days)
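As a rough conversion of these figures, 90 days is 129,600 minutes, so 99.75% uptime (Desk) corresponds to about 324 minutes, roughly 5.4 hours, of accumulated downtime over the window; 99.92% (Take Blip Platform) to about 104 minutes; and 99.99% (Channels) to about 13 minutes.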
Past Incidents
Mar 8, 2021

No incidents reported today.

Mar 7, 2021

No incidents reported.

Mar 6, 2021

No incidents reported.

Mar 5, 2021

No incidents reported.

Mar 4, 2021

No incidents reported.

Mar 3, 2021

No incidents reported.

Mar 2, 2021

No incidents reported.

Mar 1, 2021

No incidents reported.

Feb 28, 2021

No incidents reported.

Feb 27, 2021

No incidents reported.

Feb 26, 2021

No incidents reported.

Feb 25, 2021

No incidents reported.

Feb 24, 2021
Resolved - After the actions taken, the scenario was normalized. The postmortem will be filled in with more information about the root cause.
Feb 24, 12:24 GMT-03:00
Monitoring - Identified scenario: High CPU consumption was identified in our transactional database, which is responsible for processing the history of messages displayed in the Desk tool, on two nodes out of a total of 10 nodes shared by all clusters.

Impact for the client: Slowness in the display of messages inside Desk, as well as impacts on some analytical reports.

Actions in progress:

1) Rolled back a scheduled update made yesterday evening (02/23/2021) in order to isolate the scenario, since that update could be related to the identified failure;

2) Since the rollback did not have the expected effect and it was concluded that the update was in fact not related, a drain (restart of the service) was performed on one of the database server nodes, because a repair process was running on that machine. We then restarted the applications that use the transactional database, but only the applications that read from the database returned to normal. To keep the write load under control, the technical team reduced the number of write consumers on the transactional database in order to decrease access to the database (see the sketch below).

After the second action, we have already seen a decrease in failures and, consequently, an improvement in the processing of the message history displayed in the Desk service. The technical team continues to work on the case.
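The update does not say how the write consumers were reduced. Purely as an illustration, the Python sketch below limits the number of concurrent write consumers, assuming a generic queue-based pipeline; the queue, the write_history placeholder, and the consumer counts are hypothetical.

    import queue
    import threading

    # Illustrative sketch only: run fewer write consumers so the transactional
    # database receives less concurrent write load while it recovers.
    NORMAL_WRITE_CONSUMERS = 8    # hypothetical normal value
    REDUCED_WRITE_CONSUMERS = 2   # hypothetical reduced value used during the incident

    pending_messages = queue.Queue()  # stands in for the real message broker

    def write_history(message):
        # Placeholder for the real insert into the message-history store.
        pass

    def writer_loop():
        while True:
            message = pending_messages.get()
            write_history(message)
            pending_messages.task_done()

    # Start only the reduced number of consumers while the database recovers.
    for _ in range(REDUCED_WRITE_CONSUMERS):
        threading.Thread(target=writer_loop, daemon=True).start()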

NOTE: It is worth mentioning that the counting of active / sent / received messages on the Analytics screen is currently not happening in real time.
Feb 24, 10:54 GMT-03:00
Identified - The issue has been identified and we are working to get it fixed as soon as possible.
We'll update the status as soon as we have news.
Feb 24, 09:01 GMT-03:00
Update - We are continuing to investigate this issue. We'll update the status as soon as we have news.
Feb 24, 08:47 GMT-03:00
Investigating - We identified slowness in the message search commands used to display messages in the Desk tool, with failures or delays in showing messages to the attendant.
The team is working to normalize the environment.
Feb 24, 08:22 GMT-03:00
Feb 23, 2021
Postmortem - Read details
Feb 23, 22:29 GMT-03:00
Resolved - Today (02/23/2021), at around 05:00 AM, during a scheduled update, a release was deployed to the HTTP module on one of our servers in a cluster composed of 6 servers.
Monitoring observed anomalous behavior on the server whose HTTP module had been updated; however, because it was an isolated scenario, it was difficult to relate the identified failure to the change made early in the morning. After further analysis and evaluation with the technical team responsible for the platform, it was confirmed that the errors were caused by the update mentioned above.

Impact to the customer:

Failure to exchange messages on bots that use HTTP requests;
Failure of HTTP requests to “https://msging.net” (an example of this kind of request is sketched below);
Incorrect display of the number of tickets in the attendance tool for the attendant, since that count is retrieved via an HTTP request.
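The incident note does not show what such a request looks like. Purely as an illustration, the Python sketch below posts a message through the platform's HTTP interface at https://msging.net; the endpoint path, the Authorization header format, the recipient address, and the payload shape are assumptions and should be checked against the official Blip documentation.

    import requests  # third-party HTTP client (pip install requests)

    # Hypothetical bot credential; the "Key ..." header format is an assumption.
    AUTH_HEADER = "Key <your-bot-api-key>"

    # Sketch of the kind of request that was failing: posting a message to the
    # platform over https://msging.net. Path and payload are illustrative.
    response = requests.post(
        "https://msging.net/messages",
        headers={
            "Authorization": AUTH_HEADER,
            "Content-Type": "application/json",
        },
        json={
            "to": "5531999990000@wa.gw.msging.net",  # hypothetical recipient
            "type": "text/plain",
            "content": "Hello from the bot",
        },
        timeout=10,
    )
    response.raise_for_status()  # a 5xx here would match the reported failures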


Correction Applied:

The update was rolled back on the affected server.

Start date / time: 23/02/2021 05:13
End date / time: 23/02/2021 19:45
Feb 23, 22:24 GMT-03:00
Identified - We are experiencing a degradation in our core for HTTP requests, so some HTTP requests and commands may fail.
Feb 23, 05:13 GMT-03:00
Feb 22, 2021
Resolved - The scale up of the database was successfully completed, thus resolving the slowness in AI queries within the bots.

We will be monitoring the work done.

We will inform you of any news.

Start date / time: 22/02/2021 11:30
End date / time: 22/02/2021 16:00
Feb 22, 16:00 GMT-03:00
Update - Fault identified:

Today, at around 11:30 AM, our monitoring observed an increase in the consumption of our AI database, and from that moment we began to monitor it closely, since performing a scale-up of the database during the day's operation could have an even greater impact. The objective was to follow the scenario so that the DBA action could be carried out at night; however, at around 14:50 the consumption of the database reached its maximum limit and it became necessary to start the scale-up during the day.
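The escalation rule described above can be read as a simple threshold check. The sketch below is only an illustration of that logic in Python; the get_cpu_percent, start_scale_up_now, and schedule_scale_up_for_night helpers are hypothetical placeholders.

    # Hypothetical sketch of the escalation rule: watch the AI database CPU,
    # plan the scale-up for the night window, but trigger it immediately if
    # consumption reaches the limit during the day.
    CPU_LIMIT_PERCENT = 100.0

    def get_cpu_percent() -> float:
        # Placeholder for the real monitoring query against the AI database.
        return 0.0

    def start_scale_up_now():
        # Placeholder for the immediate DBA scale-up action.
        pass

    def schedule_scale_up_for_night():
        # Placeholder for scheduling the DBA action in the night window.
        pass

    def handle_ai_database_load():
        if get_cpu_percent() >= CPU_LIMIT_PERCENT:
            start_scale_up_now()
        else:
            schedule_scale_up_for_night()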

Impact for the client:

Only customers who use AI in their smart contact are impacted. The slowness is noticed when the AI is used in the bot flow.

Correction applied:
When consumption reached 100%, the scale-up of the database was started immediately.

Root Cause: AI Database High CPU Consumption

Start date / time: 22/02/2021 11:30
Feb 22, 15:57 GMT-03:00
Identified - Scenario:

We are experiencing slowness in artificial intelligence queries; our DBA team is already working on the issue.
Feb 22, 11:30 GMT-03:00