Cloud Carriers and Cloud Providers: Enemies, Friends or “Frenemies”?


Any given cloud user depends on both the cloud provider running the data center that hosts the application and one or more network service providers, or carriers, used to reach the cloud. If the cloud is “down,” meaning the user can’t access the cloud application, whose fault is it? Who is responsible for fixing the problem, and to whom does the user complain? If we aren’t careful, problems can quickly turn into a finger-pointing exercise that makes no one happy.

One thought is to set up a clear contractual relationship between the carrier and the provider. The cloud provider buys access to its users from the carrier. The carrier, in turn, provides a traditional service-level agreement (SLA) to the cloud provider covering availability, loss, latency and jitter, measured from locations close to the user and the data center. This establishes a clear division of responsibility and clean, measurable criteria for determining whether the carrier is living up to the terms and conditions. However, it doesn’t fully solve the finger-pointing problem: even if the data center and the servers are working fine, and the carrier connections to the data center are running perfectly, users still may not be able to access the cloud.
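To make the SLA metrics concrete, here is a minimal sketch of the kind of probe a carrier might run toward a data-center endpoint. It measures TCP connect latency, uses the standard deviation of the samples as a simple proxy for jitter (real SLAs often use inter-packet delay variation instead), and counts failures as loss. The endpoint, sample count and pacing are illustrative assumptions, not values from any real SLA or EXFO product.

```python
# Sketch of carrier-side SLA probing: TCP connect latency, jitter and loss
# toward a cloud data-center endpoint. All parameters are illustrative.
import socket
import statistics
import time

def probe_endpoint(host: str, port: int, samples: int = 20, timeout: float = 2.0):
    """Collect TCP connect times (in ms) from this probe location."""
    latencies = []
    lost = 0
    for _ in range(samples):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                latencies.append((time.monotonic() - start) * 1000.0)
        except OSError:
            lost += 1  # count timeouts and refused connections as loss
        time.sleep(0.2)  # pace the probes
    return latencies, lost

def sla_report(host: str, port: int) -> None:
    latencies, lost = probe_endpoint(host, port)
    loss_pct = 100.0 * lost / (len(latencies) + lost)
    if latencies:
        avg = statistics.fmean(latencies)
        jitter = statistics.pstdev(latencies)  # spread of latency samples
        print(f"{host}:{port} avg={avg:.1f} ms jitter={jitter:.1f} ms loss={loss_pct:.1f}%")
    else:
        print(f"{host}:{port} unreachable (100% loss)")

if __name__ == "__main__":
    # Hypothetical data-center front door; substitute a real endpoint.
    sla_report("example.com", 443)
```

Running such a probe from points near the user and near the data center, as the paragraph above describes, is what gives the SLA its clean, measurable pass/fail criteria.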

Another possibility is to eliminate the distinction altogether by having a carrier buy a cloud provider, as Verizon did with Terremark, or by having a cloud provider become its own carrier, as Google did by acquiring its own fiber. Either way, the cloud customer buys cloud services from a single organization running both the data centers and the network. This removes the finger pointing from the customer’s perspective, but if the organization is internally siloed, the same old problems can persist. Besides, there are far too many cloud carriers and cloud providers for any single enterprise to own everything, not even Google.

The way out of this mess is for cloud carriers and cloud providers to collaborate and move up the stack. They need to test the service as the users see it, rather than just the circuits as the carriers provide them, and also to test the virtual machines in the data center as the cloud provider manages them. If the service being sold is cloud storage, the storage should be tested. If the service is a platform for running websites, the websites should be tested. If the service is an application such as email, the application must be tested. However, these tests should not be run locally from the data center; to test the service effectively, testing must be performed from locations as close as possible to the users. Those locations are buried deep within the carrier’s network, out of reach of cloud providers.
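As a rough illustration of testing the service as the users see it, the sketch below fetches an actual web application the way a user would, rather than measuring the underlying circuit. The URL, the expected content marker and the response-time threshold are hypothetical assumptions for illustration, not anyone’s actual test method.

```python
# Sketch of a user-proximate service check: fetch the application itself
# and verify status, content and responsiveness. Parameters are illustrative.
import time
import urllib.request

def check_service(url: str, expect: str, max_ms: float = 1500.0) -> bool:
    """Fetch the service from this probe location and verify the response."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            elapsed_ms = (time.monotonic() - start) * 1000.0
            ok = resp.status == 200 and expect in body and elapsed_ms <= max_ms
            print(f"{url}: status={resp.status} time={elapsed_ms:.0f} ms ok={ok}")
            return ok
    except OSError as exc:  # covers URLError, HTTPError, timeouts
        print(f"{url}: failed ({exc})")
        return False

if __name__ == "__main__":
    # Hypothetical cloud application front end.
    check_service("https://example.com/", expect="Example Domain")
```

The same pattern extends to storage (write and read back an object) or email (send and retrieve a message); what matters is that the check is run from probes deep in the carrier’s network, near the users.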

To make the collaborative effort work, new kinds of business relationships will probably need to be established between cloud carriers and cloud providers: relationships that are cooperative rather than the adversarial seller/buyer relationships governed by SLAs. In such relationships, the carrier would accept responsibility for testing the service the cloud provider sells, the carrier and the provider would share test results honestly and in real time, and both organizations’ operations staffs would work together to solve problems.

So, how do we evolve to this collaborative model in which cloud carriers and providers work together? The obstacles are not technical; effective solutions are already available from EXFO and others. The answer lies in cloud carriers and cloud providers getting together to establish a viable business model that enables them to cooperate effectively. Business partnerships are built on relationships between equals, in which the parties involved are neither friends nor enemies.

If you'd like to learn more about testing cloud services, you can read the Testing the Cloud white paper. You can also watch the Network Forecast - Mostly Cloudy webinar that I recently co-hosted with my colleague Bruno Giguère about the cloud and its many benefits, including reduced capital expenditures and elimination of the time, space, power and cost constraints that plague the traditional computing environment.