
Friday, July 24, 2015

The Future of Broadband CPE: Part I


At Stake: Who controls the entire home, the service provider or the web company?
The network-terminating CPE device provided by the access network service provider is at an inflection point: it sits at the intersection of service providers’ business drivers and emerging technologies. What’s at stake is control of the entire home and all of its revenue-generating up-sell opportunities, including emerging Internet of Things services. Access network service providers must navigate this shift or risk being usurped by the web companies.
Customer Premises Equipment, or CPE, historically meant customer-owned equipment. In the case of a T1 circuit, the service provider would terminate the network with a CSU/DSU and connect to the customer-owned access router. If there was an issue, the SP would perform a loop-back test to the CSU/DSU, and if it passed the test they were done with support. The same is true of legacy home telephony. If there’s a dial tone at the Network Interface Device, the gray box attached to the outside of your home, the telco is “done.” If you still have issues beyond that point, it’s your home wiring, which the ILECs no longer manage (for free, anyway).
In the early days of broadband, the service provider, whether telco or cable company, would terminate its connection with a DSL or cable modem. The premises-facing interface was Ethernet (a Layer 2 interface). When consumers wanted to connect more than one device to the Internet they would acquire a Wi-Fi router (Layer 3) through the retail channel.
Today, service providers are combining the modem functionality with the Wi-Fi routing functionality into a single device. It is interesting to note that the SP is taking ownership of the Wi-Fi network, something it historically was loath to do and could not do for regulatory reasons. The more functionality an SP takes ownership of, the more it is responsible for, which leads to an inevitable increase in help calls.
Competition is forcing them to take ownership of the total customer experience. A poor experience combined with lackluster customer support is the number one reason for customer churn. Now the CPE, or broadband gateway, is taking on the dual role of terminating the network and controlling the home network, and ultimately the devices and things in the home.
This is not without precedent. The set-top box has always had this dual personality: it terminated the SP’s video network and controlled the home video experience. This is even more prevalent with whole-home DVRs. As far as cable companies were concerned, the STB was part of the network when that was convenient and CPE when that was convenient.
Now and in the future the SP-provided CPE device needs to do two things well. First, it must terminate the access network (Layers 1 and 2), hence the term “network-terminating CPE”. Second, it must control and manage the entire home experience (Layers 3-7+). It can and will do the network-terminating part well, but it MUST also do the home experience well or risk churn where competition exists, or risk having a web company usurp it.
In future articles I will address numerous issues including:
1. Virtualization Options and Realities
2. IoT and Smart Home Implications
3. Distribution of Intelligence (Cloud, Network and CPE)
4. Distribution of Intelligence (CO/HE, Outside Plant and CPE)
5. Wi-Fi & LTE Convergence
6. Business Models, Value Chains and the N-Dimensional Ecosystem Dynamics
If you would like us to help you navigate the future of broadband CPE and its industry-wide dynamics and opportunities, contact Greg Whelan at gwhelan@greywale.com

Friday, June 26, 2015

Broadband Regulations: Be Careful What You Wish For!

Regulations are a critical factor in the access network. Unlike the “rest of the network,” the access network is burdened with federal, state and local regulations, and the burden is only growing. I’ve written extensively in the past that net neutrality is a bad idea and that Title II is a gigabit killer.

Why is regulation bad for everyone, including Google? The regulated monopoly “phone companies” depreciated equipment over 30 years. With asset-based pricing regulations you want to keep your asset base as high as possible. Thus, the innovation cycle of the regulated voice industry was 30 years. In the unregulated data networking industry the desired depreciation cycle is five to seven years with three to five years being a more common life span of equipment. Thus, the innovation cycle is three to five years. Today, service providers want to accelerate their innovation cycle to less than one year and ideally three to four months to be more competitive with the “web companies” such as Google and Facebook.
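To make the depreciation incentive concrete, here is a back-of-the-envelope sketch in Python. The $100M asset base and the straight-line schedules are illustrative assumptions of mine, not figures from any carrier:

# Book value of the same asset base under a 30-year (regulated voice)
# schedule versus a 5-year (data networking) schedule, straight line.
ASSET = 100_000_000  # hypothetical initial asset base, in dollars

def book_value(initial, life_years, year):
    """Remaining book value after `year` years of straight-line depreciation."""
    return max(0.0, initial * (1 - year / life_years))

for year in (5, 10, 20, 30):
    print(f"Year {year:2d}:  30-yr: ${book_value(ASSET, 30, year):>13,.0f}"
          f"   5-yr: ${book_value(ASSET, 5, year):>13,.0f}")

After five years the 5-year schedule has written the equipment down to zero, so there is nothing left to earn a regulated return on; the 30-year schedule still carries over $83M of book value. That is why a regulated carrier had every reason to leave equipment in place for decades.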
Until recently the net neutrality debate was focused on adverse traffic impacts such as throttling P2P traffic. It’s widely reported that as few as 10 percent of users consume upwards of 80 percent of capacity. The numbers have changed with the proliferation of streaming video, but the issue remains. Mobile network operators have solved this problem with data caps. They also have programs where web companies can pay so their traffic doesn’t count against subscribers’ data caps. (This may be illegal soon as well.) When an analogous program (for example, a paid fast lane) was implemented in the broadband access market, there was outrage.
Traditional content delivery networks (CDNs) can bypass much of the public Internet to improve quality of service. Companies that want to provide a better user experience can use CDNs and cache their content in select Tier 1 locations across the country. This helps; however, delivery from the Tier 1 cache to the user is best effort. Once the traffic enters the local exchange carrier’s (LEC) network in a large metropolitan area, the “last 50” miles are best effort.
With this model OTT companies cannot ensure the quality of their service. Why shouldn’t they be able to pay the LEC for better traffic treatment? The argument is that this benefits the large companies to the detriment of start-up companies; it’s just another challenge innovative start-ups must overcome. This actually benefits consumers, as only those companies with a compelling offering will make it over the hurdle. Marginal companies with a marginal offering won’t flood the market and the network with garbage. This is a good thing. Isn’t the FCC all about protecting the consumer?
Can capitalism and the free market address the issue of a “digital divide”? Yes; a case in point is Comcast in the Boston area. The company offers $10/month broadband service to any family that has children on the free or subsidized school lunch program in the city of Boston. No laws, no regulations, just a solid business-driven move by Comcast.
Service providers have invested billions of dollars deploying and managing broadband networks. Data rates have continuously increased. Gigabit networks are being deployed around the world by a range of companies and organizations. The free market is driving them. It’s counterintuitive to expect them to spend limited CAPEX if their return on investment is regulated or uncertain. Today, regulators are faced with conflicting priorities: on one hand they want to spur gigabit investments, but on the other hand they want to regulate broadband access. It’s obvious that you can’t have both. To repeat: Title II is a gigabit killer.

Friday, February 6, 2015

Virtualization: There’s got to be more!


Porting to Intel and virtual machines is a technical implementation detail, not a business solution.

When vendors are asked about their virtualization strategy, you often hear a common answer: they’re “virtual” because they ported their software to Intel. What’s the value proposition? Porting to Intel isn’t it. Sure, it reduces CAPEX, at the expense of performance. All it really does is shift industry revenue, power and influence from the Broadcoms of the world to Intel. Plus, we now know that even if CAPEX went to ZERO, less than 33% of CxOs’ top-of-mind business problems would be solved. When pressed for the rest of their virtualization strategy, they say they run on virtual machines in a data center. OK, and then what?

Let’s assume they perform three functions called A, B and C. They port them to Intel and then run them on virtual machines (VMs). Shifting revenue from Broadcom to Intel is a technical implementation detail and not a business solution. Service providers should be thinking: there’s got to be more.



The logical question to ask is whether A, B and C are the right functions in the virtual world.  Just because they were required in yesterday’s environment does not mean they are required in the virtual world.   Do you really need 20% of A and 60% of B?  Do you really need 150% of C?  You get the picture.
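One way to see what “more” could look like is simple arithmetic. The sketch below takes the demand figures from the rhetorical question above (20% of A, 60% of B, 150% of C, all illustrative) and compares a monolithic appliance, which bundles all three functions, against disaggregated virtual functions scaled independently:

# A monolithic appliance forces you to buy whole units of every function
# to satisfy the single largest demand; disaggregated VNFs scale per function.
import math

demand = {"A": 0.20, "B": 0.60, "C": 1.50}    # fraction of one unit needed

appliances = math.ceil(max(demand.values()))  # whole monoliths needed: 2
monolith_units = appliances * len(demand)     # every function ships twice

vnf_units = sum(demand.values())              # deploy only what is needed

print(f"Monolithic:  {appliances} appliances -> {monolith_units:.1f} function-units")
print(f"Virtualized: {vnf_units:.1f} function-units")

Two appliances deploy six function-units to serve a demand of 2.3. The savings come from disaggregation, not from the Intel port itself.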
Service providers will be spending billions of dollars moving to the virtual world, so they should be asking themselves: why? Sure, there’s a benefit to taking the A’s, B’s and C’s of today’s world to virtual machines. But is it really enough? This is a once-in-a-lifetime transformation and a fight for ultimate survival. SPs need to ask for more.


Vendors, on the other hand, need to be asking themselves similar questions. Is porting to Intel enough? What can we do that’s game changing in the virtual world? The answer, IMHO, to the first question is No Way. The answer to the second question depends on the vendor’s core competencies, ecosystem presence, business strategy, et al. The good news is that ACG Research can help you answer this second question. Give us a shout.



Monday, June 30, 2014

Does Anyone Doubt IoT is at the Peak of the Hype Curve?


Yikes!  That's all I need to say about the excessive hype of anything and everything IoT these days.  From the connected refrigerator to the connected car to wearables, the hype in this market is out of control.  Every industry leader, from Cisco, Microsoft and Intel to Facebook, Google, Apple and Amazon, is staking its claim as the industry thought leader.  The same is true for hundreds of smaller companies.  The only thing clear is that IoT is at the peak of the hype curve.


M2M (machine-to-machine) applications have been around for decades and remain quite successful.  Many are based on industry-standard protocols, and millions of "things" are connected via the cellular network.  Nothing new here: remote sensors connect via some network to a centralized location where the sensor data is aggregated, analyzed and acted upon.

Once the hype "bubble" crashes, many real markets will be wildly successful.  You will know which markets they are because they will not include the acronym IoT in their descriptions.



Tuesday, June 10, 2014

IoT Success: Batteries & Backhaul...& Transparency

The Internet of Things (IoT) is riding high at the peak of the Hype Cycle.  Is it a $17 trillion market or merely a $10 trillion market?  That depends on what you include in your definition of a "thing": the more you include, the bigger the market.

IoT applications share a basic common architecture, as shown in Figure 1: the application digitizes some analog parameter and sends it to the "cloud" for analysis and possible action.  The primary factors that all IoT or M2M applications must address are Batteries, Backhaul and Transparency.
[Figure 1: basic IoT architecture; a "thing" digitizes an analog parameter and sends it over a backhaul network to the cloud]

Let's address the transparency issue first.  In physics there's the "observer effect": simply put, whenever you measure a system you disturb it.  Since most IoT applications measure a real-world analog phenomenon (e.g., temperature, pressure, et al), the "thing" must do so with minimal impact on the system being measured.  Transparency parameters include cost (CAPEX and OPEX), size, weight, aesthetics, etc.

Batteries, or more generally power, are a critical parameter within IoT applications.  If you require grid power you lose some transparency and limit where you can deploy the thing; not every location is close enough to the grid to be powered by it.  Remote sensors will require batteries, and these batteries must last many months and even many years.

Even IoT applications within the home must address the battery issue.  Take a simple motion detector.  The ideal placement is in the corner of the room near the ceiling, where there are few power outlets nearby.  Thus the customer can either install an outlet close by, move the sensor close to an outlet (i.e., near the floor) or live with a wire dropping down to the nearest outlet.  Batteries solve this problem.  However, if they need to be replaced every month, the value and transparency quickly depreciate.  What if the motion detector is part of a security perimeter for a high-value asset (e.g., a power plant)?  If the good guys need to replace the batteries periodically, it will show the bad guys where these "hidden" sensors are.  Thus, batteries are critical to the success of the application and can make or break a business case.
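The battery math is simple but unforgiving. The following sketch estimates battery life for a duty-cycled sensor; all the currents and the reporting rate are illustrative assumptions, not measurements:

# Back-of-the-envelope battery life for a duty-cycled sensor.
# The estimate ignores battery self-discharge, which caps real-world life.
BATTERY_MAH     = 2500.0   # e.g., two AA cells, roughly 2500 mAh
ACTIVE_MA       = 20.0     # MCU + radio awake current, mA
SLEEP_MA        = 0.005    # deep-sleep current, mA (5 uA)
ACTIVE_SEC      = 2.0      # seconds awake per report
REPORTS_PER_DAY = 24       # one report per hour

awake_h = REPORTS_PER_DAY * ACTIVE_SEC / 3600.0           # awake hours per day
avg_ma = (ACTIVE_MA * awake_h + SLEEP_MA * (24 - awake_h)) / 24
life_years = BATTERY_MAH / (avg_ma * 24) / 365
print(f"Average draw {avg_ma:.4f} mA -> idealized life ~{life_years:.1f} years")

With aggressive sleep the idealized life runs to many years; self-discharge and temperature will cap it lower in practice, but the lesson holds: average current, not peak current, determines whether the business case works.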

The "I" in "IoT" is for "Internet", meaning internet protocols (IP).  The digital data of the analog phenomena must be sent to the cloud via some type of network. This is referred to as Backhaul.  Networking options are plentiful  and include 2G/3G/4G/LTE, Wi-Fi, Zig-Bee, Satellite, Blue Tooth, Ethernet and local broadband options.  The technology selected depends on the application and on parameters such as data rates, latency, cost and what's available.  The more remote the thing is the less options are likely available.  The selection of a backhaul solution must address both transparency and battery issues discussed above.  

There are other issues and parameters that need to be addressed to make an IoT application successful.  For example, the cloud solution (e.g., the "big data" database, analytics, heuristics, et al) is not trivial, yet these are solvable engineering problems.  The same is true for the "thing" or sensor: for most, all you have to do is go to the Analog Devices catalog and select a chip.  Again, nontrivial but solvable.  Thus, batteries, backhaul and transparency are the critical make-or-break parameters that determine the success of your IoT application.


To discuss this please email me at gwhelan@greywale.com

Other articles can be found at greywale.com




Wednesday, June 4, 2014

The Next Cord Cutting: Real Cord Cutting


Today "cord cutting" refers to consumers who stop paying for TV and go broadband only from the cable or telecom company.  This is more accurately called "cord shaving".  The economic impact is significant but it's more of a redistribution.  More money to "Netflix" and less to the service provider for video.  Yet, more to the latter for higher capacity broadband that provides higher margins.

Tomorrow's "Real Cord Cutting" refers to consumers who completely stop all services from a wired service provider.  They go completely wireless.  We've seen the prequel with the elimination of the "home phone".  This next generation of cord cutting has consumers relying on their 4G/LTE service for all broadband services, which can be accomplished by simply turning a smartphone into a Wi-Fi access point at home.  The economic impacts of next-generation real cord cutting are severe: fixed access service providers not only lose all service revenues, they lose the customers entirely.

As 4G/LTE deployments expand and more small cells are deployed, the average bandwidth per device will increase substantially.  Crossing the Netflix threshold (i.e., the point where the quality of streaming video is acceptable) is only a matter of time.  What will slow real cord cutting down is the price of mobile data plans, which can eliminate the intended savings in the first place.  Service providers with wireless assets will be in a strong position to succeed in this future scenario.  Others, such as cable MSOs, will need to address the pricing plans that are driving customers away in the first place.  They can also compete with unique content, primarily "Sports and Wars" (i.e., live programming), and push for better-quality video such as emerging 4K technologies.

Today's cord cutting is growing significantly, especially in the under-30 demographic.  Tomorrow's real cord cutting will occur, and it will bring substantial economic disruption for the entire ecosystem.


For past articles please visit greywale.com


Wednesday, May 21, 2014

Open Internet + Fast Lane: Win for Consumers: Yet Trust but Verify


The recent move by the FCC is a win for Consumers.  Yet it's important that the FCC "Trust but Verify".

Let's look at who will be the winners under the new FCC rules: the Consumers.  Consumers will be the ultimate winners.  First, the ISPs will get a fair return on their capital investments and will have the incentive to invest in more bandwidth, which enables wave after wave of innovation.  Second, the companies that pay for the fast lane will be those whose services consumers want and are willing to pay for; Netflix, for example.

Keep in mind that the FCC proposal STILL prohibits the ISP from degrading traffic.  Thus, the consumer applications in high demand get preferred treatment for the last 50 miles of the network (CDN cache to home), and all other traffic gets treated the same as it always has been.

The argument that small start-up companies will be disadvantaged is hollow.  It will force entrepreneurs to innovate more to deliver a compelling product to consumers.  It's just another market force to overcome.  It will raise the bar and keep marginal applications from clogging the network.  This is no different from supermarket shelf space, a large barrier to entry.  Coke and Pepsi dominate.  Yet look at all the upstart beverage companies that keep gaining shelf space.  They're doing it by creating innovative products that consumers want, not by whining to some federal regulator.

Why do industry pundits complain when Verizon, AT&T, Comcast, et al get a fair return on their investment, yet look the other way when Google, Amazon, et al make thousands of dollars per second?

Therefore, Consumers are the big winners here: A) more bandwidth, B) more innovation and C) fewer marginal applications.

Given that the FCC is chartered by Congress to protect consumers, this new Fast Lane approach is a step in the right direction.  However, the service providers must be careful not to overuse this opportunity.  Hence the "Trust but Verify" mantra.

Telcos and cablecos must know that the FCC will be closely monitoring this new ruling (assuming it gets implemented).  They must adopt a high level of transparency to eliminate complaints from consumers.  Remember, all it takes is one savvy lawyer to get a single citizen to file a complaint.  To prevent endless litigation and legal costs that would effectively eliminate the economic value of the "fast lane", service providers need to provide this high level of transparency themselves, to avoid an FCC-mandated higher level of transparency.

SPs should freely adopt a level of transparency that satisfies consumers and their advocates while limiting the level of proprietary disclosure to their competitors.  They should also ensure the "fast lane" does not, by design, effectively harm all other traffic.  This can occur unwittingly using standard IETF IP networking protocols, as the sketch below illustrates.
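As an illustration, consider DiffServ (IETF RFC 2474), the standard mechanism for marking IP packets for preferential treatment. The sketch below marks a UDP socket's traffic as Expedited Forwarding; whether that mark quietly starves best-effort traffic depends entirely on how each hop's queues are configured, which is exactly where transparency is needed. This is a minimal Linux/macOS Python example, not a description of any SP's actual deployment:

# Mark a socket's egress packets with DSCP EF, the "priority" code point.
import socket

DSCP_EF = 46          # Expedited Forwarding per-hop behavior (RFC 3246)
tos = DSCP_EF << 2    # DSCP occupies the upper 6 bits of the legacy TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)  # mark all egress packets
sock.sendto(b"fast-lane packet", ("127.0.0.1", 9999))   # harmless localhost datagram
sock.close()

One line of code requests priority; an over-generous strict-priority queue at any hop can then harm everything left in the default class without anyone intending it.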

Therefore, I believe the "Open Internet + Fast Lane" approach is worth implementing.  It ultimately benefits the consumer and it's fair to the service providers.  Yet, and it's a BIG yet, the FCC must Trust but Verify.

To comment on this or to discuss this in more detail please contact me at gwhelan@greywale.com

For additional articles and analysis please visit www.greywale.com

Tuesday, April 15, 2014

Is VoLTE Worth the Investment?


Mobile network operators across the globe are moving to deploy VoLTE (Voice over LTE) systems.  The reasons for this are as expected.  They include:
  1. Better voice quality
  2. Protect voice revenues
  3. Leverage IMS investment
  4. Be able to provide billing services (Leverage their billing system)
  5. Migrate 2G and 3G voice services to LTE


However, given the success of over-the-top (OTT) services over wired broadband and in current wireless networks, is this billion-dollar investment prudent?  Consider the following:
  1. OTT has won, or is winning, the battles.  Skype and Netflix are two good examples.  WhatsApp is another case in point of OTT success. 
  2. OTT services are “good enough”.  The majority of the market is unwilling to pay for QoS when a free, or near free, service is sufficient.
  3. When a consumer experiences poor quality for an OTT service they blame the service provider and not the OTT provider.
  4. QoS can only be guaranteed when the “call” is completely on-net.  As soon as the voice call leaves the originating SP, all bets are off.  Why spend the $ billions for only a subset of calls?
  5. Voice revenues, and now SMS text revenues, are crashing.   Why spend $ billions to fight a losing battle?
  6. Service providers are moving away from call-based billing (i.e., CDRs, or Call Detail Records), although the NSA loves them.  Unlimited calls and texts are the norm.
  7. Data limits are prevalent.  A few YouTube videos or one Netflix movie will dwarf a month’s worth of phone calls from a data usage perspective (see the arithmetic sketch after this list).
  8. When Google deploys its fiber optic networks (e.g., Kansas City), it does not offer voice services, in order to avoid the mountains of regulation that come with offering voice.  Does that mean people in Kansas City don’t make voice calls?
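The data usage arithmetic behind point 7 is worth spelling out. The rates below are illustrative assumptions of mine (roughly a wideband voice codec versus a common HD streaming rate):

# A month of voice calls versus one HD movie, in megabytes.
VOICE_KBPS         = 24    # roughly an AMR-WB-class voice codec
CALL_MIN_PER_MONTH = 300   # five hours of calls per month
HD_MBPS            = 5     # common HD streaming rate
MOVIE_HOURS        = 2

voice_mb = VOICE_KBPS * CALL_MIN_PER_MONTH * 60 / 8 / 1000
movie_mb = HD_MBPS * MOVIE_HOURS * 3600 / 8
print(f"A month of voice calls: ~{voice_mb:,.0f} MB")
print(f"One HD movie:           ~{movie_mb:,.0f} MB  ({movie_mb / voice_mb:.0f}x)")

A month of calls comes to roughly 50 MB; a single HD movie is measured in gigabytes, nearly two orders of magnitude more.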

It is understandable why a service provider with decades of legacy voice experience would want to consider VoLTE.  After all, they have decades of legacy voice experience.   Similarly, it’s not surprising that service providers that have spent $ billions and years deploying and perfecting IMS want to leverage that investment in time, money and careers.  It’s difficult to face the reality that IMS is a “sunk cost” and therefore should not be factored in when evaluating VoLTE investments.

The OTT trends and successes cannot be refuted.  Service providers continue to face the fact that they cannot compete effectively against every segment of OTT services.  The $ billions they would spend on VoLTE would be better spent on: 
  1. Increasing bandwidth per subscriber
  2. Providing industry leading network security to protect their subscribers
  3. Fighting the short-sighted net neutrality laws that make regulators “feel good” at the expense of long-term viable markets.
  4. Creating an infrastructure where OTT’s want to pay them for QoS in a fair, open and non-discriminatory manner. 

I’ve spent 20 years working on technologies with the goal of ensuring service providers do not become a “dumb pipe”1.   I am fully biased toward ensuring the success of SPs and believe that “net neutrality” is unfair to them2.  However, SPs should not spend $ billions on VoLTE just because it’s voice.  High-speed, high-quality broadband is the future.  Don’t fight it.

To discuss this please contact me at gwhelan@greywale.com


 Notes:       
  1.  http://greywhalemanagement.blogspot.com/2012/07/if-sps-become-dumb-pipe-everybody-loses.html
  2.  http://greywale.blogspot.com/2014/02/net-neutrality-overruled-win-for.html


 Click here for an INDEX of Articles and Posts

Wednesday, September 18, 2013

A Telco Energy Strategy Should Demand ZERO Impact on Service


As energy strategies reach the boardroom, service provider management should insist on “zero impact” on services.  The stakes are too high in the competitive zero-sum game they play in.  Customer satisfaction, reduced churn and a strong brand are paramount in this environment.  By treating energy as a strategic initiative, they will achieve the benefits of lower OPEX, an enhanced brand and more efficient end-to-end operations.  Tactical energy initiatives will not get funded if they have a perceivable adverse effect on consumer and business services.  These adverse effects could be short-lived, as during installation, or long-term, if, for example, latency is introduced.   Thus, their energy strategy should demand zero impact on services.

Note the emphasis on “services” instead of “network”.    It would be unreasonable to demand zero impact on the network if you are deploying a new architecture or an energy-aware protocol.  Yet with IP (Internet Protocol), the impact on the network should not cause a perceivable impact on services.

Is zero impact unreasonable, and wouldn’t “minimal impact” be a better goal?  The challenge would be defining what “minimal” means.  Would it mean X video anomalies per 30 minutes?  Why not X+1?  Would it mean Y dropped calls per tower per minute?  Why not Y+1?  Also, who defines X and Y?  Would the CEO, CTO or CMO define them?  Would international standards organizations set them?

Setting the goal of “Zero Impact” sends a clear message throughout the organization of what is expected.  Terms such as “sustainability” and “green” will take on clearer meaning.  Green projects that make people feel good but have no financial justification will fail fast so the real winners can progress.  Therefore, telcos and service providers should demand Zero Impact on services.

Contact: Greg Whelan at gwhelan@greywale.com to discuss.

Click here for an INDEX of Articles and Posts


Friday, July 19, 2013

Service Provider Energy Efficiency (SPEE) is Inevitable!

Network traffic is increasing exponentially while energy efficiency is increasing only linearly.  Yet network engineers are focused on, and measured on, scalability and availability; they don’t see their energy expenses.   However, given the simple math of exponential versus linear growth, it is inevitable that energy efficiency and energy management will become primary drivers in the near future.  The sketch below shows why.
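Here is the simple math in sketch form. The growth rates are illustrative assumptions (traffic compounding at 40% per year, bits-per-joule efficiency improving by a fixed linear step), not measured industry figures:

# Exponential traffic growth versus linear efficiency improvement.
traffic = 1.0       # normalized traffic volume
efficiency = 1.0    # normalized bits per joule

TRAFFIC_GROWTH  = 0.40   # +40% per year, compounding (exponential)
EFFICIENCY_STEP = 0.15   # +0.15 per year (linear)

for year in range(1, 11):
    traffic *= 1 + TRAFFIC_GROWTH
    efficiency += EFFICIENCY_STEP
    energy = traffic / efficiency   # energy needed to carry the traffic
    print(f"Year {year:2d}: traffic {traffic:5.1f}x  "
          f"efficiency {efficiency:4.2f}x  energy {energy:5.2f}x")

Under these assumptions, energy consumption is up roughly an order of magnitude within a decade despite continuous efficiency gains. At some point that line item forces its way onto the engineering dashboard.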