Dean Bubley's Disruptive Wireless: Thought-leading wireless industry analysis

  • 2nd Workshop on Enterprise/Private Cellular, December 1st, London
    NEW: Disruptive Analysis & Rethink Research joint workshop on Enterprise Cellular Networks, London, December 1st, 2017

    At the end of May, I co-ran a day-long workshop with Caroline Gabriel covering enterprise and private LTE, for industry verticals, neutral-hosts, indoors, government and other environments.

    We had around 20 people involved, with a mix of presentations, group discussions and exercises, and ample time for networking. The event was held under the Chatham House Rule, so people could talk confidentially without direct attribution of comments.

    Six months later, we're repeating the workshop, on December 1st in Central London. A limited number of places are available.

    New mobile devices/applications and the emergence of the Industrial IoT mean that high-quality - often mission-critical - networks are required for new systems and applications. These can span both on-premise coverage (eg at a port, factory, office, wind-farm or hospital) and the wide-area (eg for smart cities or future rail networks).

    A lot has progressed in 2017, and I'm expecting 2018 to bring further developments:
    • Regulators in some markets have actively looked to provide frequency ranges for public safety, large businesses and other uses.
    • Every spectrum conference I've been to has had a session or two on shared bands, and each is taking unlicensed technologies more seriously.
    • Rules on CBRS have solidified in the US, albeit with possible changes proposed to the FCC. Many industries and major companies (eg the oil sector) are seriously engaged.
    • MulteFire is looking "real", with deployments (and devices) expected next year.
    • Vendors including Nokia, Ericsson and Huawei have all indicated growing interest in private cellular, as have a host of smaller players and WiFi specialists looking to add cellular radios.
    • The 5G community is recognising that "verticals" may not always be best-addressed by traditional MNOs, and may require new models. Even most operators concede that they can't do everything - especially for industrial IoT connectivity or uses with heavy legal liability or certification requirements.
    • Various large industrial and utility/rail players have shown a lot of interest in private LTE, alongside WiFi meshes, LPWAN and other vertical-oriented network technologies.
    • Community, rural and emerging-market players have started to look at cellular in unlicensed/shared bands to reduce costs and improve coverage. 



    Workshop structure & coverage

    The day will have a maximum of 30 attendees to ensure a high level of discussion and interaction. We expect a diverse mix of service providers, vendors, regulators and other interested parties such as enterprises, investors and developers. 

    It will be suitable for C-level executives, strategists, product management, marketing functions, CTO office, market analysts and regulatory affairs specialists.  

    It will be led by myself and Rethink Research's Caroline Gabriel (link). We are both well-known industry figures, with many years of broad communications industry analysis - and outspoken views - between us.


     
    Topics to be discussed include:

    • Key market drivers: IoT, automation, mobile workers, vertical-specific operational and regulatory issues, indoor coverage, democratisation of wireless expertise
    • Spectrum-sharing, including unlicensed, light/local-licensing and CBRS-type models. What bands are different countries' regulators looking at? 2.6GHz, 3.5GHz, 4GHz, 28GHz, others?
    • Evolution of key enabling technologies such as MulteFire, 5G, NB-IoT, network-slicing, SDN, small cells, edge computing, and enterprise-grade IMS cores
    • Regulatory/policy issues: spectrum allocation, competition, roaming, repeaters, national infrastructure strategies and broader "Industry 4.0" economic goals
    • The shifting roles of MVNOs, MVNEs, neutral hosts and future "slice operators"
    • Numbering and identity: eSIM, multi-IMSI, MNC codes
    • How will voice & UC & push-to-talk work on private cellular networks?
    • Commercial impacts, new business model opportunities & threats to incumbents
    • Vendor dynamics: Existing network equipment vendors, enterprise solution providers, vertical wireless players, managed services companies, new industrial & Internet players (eg GE, Google), implications for BSS/OSS, impact of open-source
    (I've covered several of these themes in previous posts and presentations. If you want more detail about some of my thinking, see links here and here and here. We will be going into a lot more depth in the workshop itself. And for a quick 20-minute "taster", see the video of my presentation from the recent TADSummit event in Lisbon.)

    The workshop will take place at the Westbury Hotel in Mayfair, central London [link]. It will run from 9am-5pm, with plenty of time for networking and interactive discussion. Come prepared to think and talk, as well as listen - these are "lean-forward" days. Coffee and lunch are included.

    The attendance fee is £699+VAT, with a discount for a second attendee. Payment can be made via credit card (see Paypal Buy Now button below), or purchase-order & invoice on request.  

    Note: sometimes Paypal can be a bit awkward, especially with corporate cards or accounts. Drop me an email if you experience any problems or for further details: 
    information AT disruptive-analysis DOT com






  • Debunking the Network QoS myth
    Every few years, the network industry - vendors, operators & industry bodies - suffers a mass delusion: that there is a market for end-to-end network QoS for specific applications. The idea is that end-users, application developers - or ideally both - will pay telcos for prioritised/optimised connections of specific "quality", usually defined in terms of speed, latency & jitter (variability).

    I've watched it for at least a dozen years, usually in 3-year waves:
    • We had enterprise networks promising differentiated classes of service on VPNs or the corporate connection to the Internet. Avoid the impact of the marketing department watching cat videos!
    • We had countless failed iterations of the "turbo boost" button for broadband, fixed or mobile.
    • We had the never-realised "two-sided markets" for broadband, featuring APIs that developers would use to pay for "guaranteed QoS".
    • We had numerous cycles of pointless Net Neutrality arguments, talking about "paid prioritisation", a strawman of massive proportions. (Hint: no content or app developer has ever had lobbyists pleading for their right to buy QoS, only telcos asking to be able to sell it. Compare with, say, campaigns for marijuana decriminalisation).
    • We currently have 5G "network slicing" concepts, promising that future MNOs will be able to "sell a slice" to an enterprise, a car manufacturer, a city or whatever.
    • My long-standing colleague & interlocutor Martin Geddes is pitching a concept of app-focused engineering of networks, including stopping "over-delivering" broadband to best-efforts applications, thus forcing them to predict and right-size their demands on the network.
    In my view, most of these attempts will fail, especially when applied to last-mile Internet access technologies, and even more-especially to wireless/mobile access. There isn't, nor ever will be, a broad and open market for "end-to-end network QoS" for Internet applications. We are seeing network-aware applications accelerating much faster than application-aware networks. (See this paper I wrote 2 years ago - link).

    Where QoS works is where one organisation controls both ends of a connection AND also tightly-defines and controls the applications:
    • A fixed-broadband provider can protect IP telephony & IPTV on home broadband between central office & the home gateway.
    • An enterprise can build a private network & prioritise its most important application(s), plus maybe a connection to a public cloud or UCaaS service.
    • Mobile operators can tune a 4G network to prioritise VoLTE.
    • Telco core and transport networks can apply differential QoS to particular wholesale customers, or to their own various retail requirements (eg enterprise users' data vs. low-end consumers, or cell-site timing signals and backhaul vs. user data). 
    • Industrial process & control systems use a variety of special realtime connection protocols and networks. Vendors of "OT" (operational technology) tend to view IT/telecoms and TCP/IP as quaint. The IT/OT boundary is the real "edge".
    Typically these efforts are costly and complex (VoLTE was frequently described by MNOs as one of their hardest projects to implement), make it hard to evolve the application rapidly because of dependencies on the network and testing requirements, and often have very limited or negative ROI. More importantly, they don't involve prioritising chunks of the public Internet - the telco-utopian "but Netflix will pay" story.

    There are a number of practical reasons why paid-QoS is a myth. And there's also a growing set of reasons why it won't exist (for the most part) in future either, as new techniques are applied to deal with variable/unpredictable networks.

    An incomplete list of reasons why Internet Access QoS isn't a "market" include:
    • Coverage. Networks aren't - and won't be - completely ubiquitous. Self-driving cars need to be able to work offline, whether in a basement car-park, during a network outage in a hurricane, or in the middle of a forest. The vehicle won't ask the cloud for permission to brake, even if it's got promised millisecond latency. Nobody pays for 99.99% access only 80% of the time.
    • The network can't accurately control or predict wireless effects at micro-scale, ie RF absorption or interference. It can minimise the damage (eg with MIMO, multiple antennas) or anticipate problems (weather forecast of rain = impact on mmWave signals).
    • End-user connections to applications generally go via local WiFi or LAN connections, which service providers cannot monitor or control.
    • No application developer wants to cut QoS deals with 800 different global operators, with different pricing & capabilities. (Or worse, 800 million different WiFi owners).
    • 5G, 4G, 3G and zero-G all coexist. There is no blanket coverage. Nobody will pay for slicing or QoS (if it works) on the small islands of 5G surrounded by an ocean of lesser networks.

    • "Applications" are usually mashups of dozens of separate components created by different companies. Ads, 3rd-party APIs, cloud components, JavaScript, chunks of data from CDNs, security layers and so on. Trying to map all of these to separate (but somehow linked) quality agreements is a combinatorial nightmare.
    • Devices and applications have multiple features and functions. A car manufacturer wouldn't want one slice, but ten - engine telemetry, TV for the kids in the back seat, assisted-driving, navigation, security updates, machine-vision uploads and so on all have very different requirements and business models.
    • Lots of IoT stuff is latency-insensitive. For an elevator maintenance company, a latency of a week is fine for seeing whether the doors are sticking a bit and an engineer needs to arrive a month earlier than scheduled.
    • I don't know exactly how "serverless computing" works, but I suspect that it - and future software/cloud iterations - will take us even further from having apps ask the network for permission/quality on the fly.
    • Multiple networks are becoming inevitable, whether they are bonded (eg SD-WANs or Apple's use of TCP Multipath), used in tandem for different functions (4G + SigFox combo chips), meshed in new ways, or linked to some sort of arbitrage function (multi-IMSI MVNOs, or dual-SIM/radio devices).  See also my piece on "Quasi-QoS" from last year (link)

    • Wider use of VPNs, proxies and encryption will mean the network can't unilaterally make decisions on Internet QoS, even if the laws allow it.
    • Increasing use of P2P technologies (or D2D devices) which don't involve service providers' control infrastructure at all.
    • Network APIs would probably have to be surfaced to developers via OS/browser functions. Which then means getting Apple, Google, Microsoft et al to act as some sort of "QoS storefront". Good luck with that.
    • No developer will pay for QoS when "normal" service is running fine. And when it isn't, the network has a pricing/delivery challenge when everyone tries to get premium QoS during congestion simultaneously. (I wrote about this in 2009 - link)
    • Scaling the back-end systems for application/network QoS, to perhaps billions of transactions per second, is a non-starter. (Or wishful thinking, if you're a vendor).
    • There's probably some extra horribleness from GDPR privacy regulations in Europe and information-collection consent, which further complicates QoS as it's "processing". I'll leave that one to the lawyers, though.
    • It's anyone's guess what new attack-surfaces emerge from a more QoS-ified Internet. I can think of a few.
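    The multi-network point above can be illustrated with a toy per-flow path selector - the kind of decision an SD-WAN box or dual-radio device makes locally, without asking any network for paid QoS. All names, fields and numbers here are invented for the sketch, not from any real product:

    ```python
    def pick_path(flow, paths):
        """Toy SD-WAN-style selection: the endpoint scores its available
        networks against the flow's needs, instead of buying QoS end-to-end.
        'flow' and 'paths' are illustrative dicts, not a real API."""
        def usable(p):
            return p["up"] and p["latency_ms"] <= flow["max_latency_ms"]

        candidates = [p for p in paths if usable(p)]
        if not candidates:
            return None  # queue locally / work offline, as the bullets above describe
        # Cheapest acceptable path wins; ties broken by lower latency.
        best = min(candidates, key=lambda p: (p["cost_per_mb"], p["latency_ms"]))
        return best["name"]
    ```

    The point of the sketch is that the intelligence sits at the edge: the device arbitrates between networks it can see, and a network that tried to sell "guaranteed quality" to this flow would simply be routed around when a cheaper or better path exists.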
    But the bigger issue here is that application and device developers generally don't know or care about how networks work, or (in general) have any willingness to pay. Yes, there's a handful of exceptions - maybe mobile operators wanting timing sync for their femtocells, for example. Safety-critical communications obviously needs quality guarantees, but doesn't use the public Internet. Again, these link back to predictable applications and a willingness to engineer the connection specifically for them.

    But the usually-cited examples, such as videoconferencing providers, IoT specialists, car companies, AR/VR firms and so on are not a viable market for Internet QoS. They have other problems to solve, and many approaches to delivering "outcomes" for their users.

    A key issue is that "network performance" is not considered separately and independently. Many developers balance network usage against other variables such as battery life and power consumption. They also think about other constraints - CPU and screen limitations, user behaviour and psychology, the costs of cloud storage/compute, device OS variations and updates, and so on. So for instance, an app might choose a given video codec based on what it estimates about available network bandwidth, plus what it knows about the user, battery and so on. It's a multi-variable problem, not just "how can the network offer better quality".
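    That multi-variable trade-off can be sketched as a toy decision function. The codec names are real, but the thresholds and the decision logic are invented for illustration - no real app works exactly like this:

    ```python
    def choose_video_codec(est_bandwidth_kbps, battery_pct, hw_decode_available):
        """Toy illustration: the codec/bitrate choice depends on network,
        battery and hardware together - not on network quality alone.
        All thresholds are arbitrary, for the sake of the sketch."""
        # Prefer a more efficient codec only if hardware decode spares the battery.
        if hw_decode_available and est_bandwidth_kbps < 1500:
            codec = "HEVC"      # better compression when bandwidth is tight
        elif battery_pct < 20:
            codec = "H.264"     # cheaper to decode in software on a low battery
        else:
            codec = "VP9"
        # Cap the target bitrate below the bandwidth estimate, leaving headroom.
        target_kbps = min(int(est_bandwidth_kbps * 0.8), 4000)
        return codec, target_kbps
    ```

    Note that the network is just one input among several - which is exactly why a network-side "quality" knob, priced per-application, maps poorly onto how developers actually make these decisions.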




    Linked to this is analytics, machine learning and AI. There are huge advances in tuning applications (or connection-managers) to deal with network limitations, whether that relates to performance, cost or battery use. Applications can watch rates of packet throughput and drops from both ends, and make decisions how to limit the impact of congestion. (see also this link to an earlier piece I wrote on AI vs. QoS). 
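    The endpoint-driven adaptation described above can be reduced to a minimal AIMD-style (additive-increase, multiplicative-decrease) controller. The constants here are arbitrary; real implementations, such as the congestion controllers used in WebRTC stacks, are far more sophisticated:

    ```python
    def adapt_send_rate(current_kbps, loss_fraction, min_kbps=100, max_kbps=5000):
        """Toy AIMD loop: back off sharply when the endpoints observe packet
        loss, probe gently upwards when the path looks clean. The 2% loss
        threshold and step sizes are invented for the sketch."""
        if loss_fraction > 0.02:           # >2% loss: treat as congestion
            new_rate = current_kbps * 0.5  # multiplicative decrease
        else:
            new_rate = current_kbps + 50   # additive increase: probe for capacity
        return max(min_kbps, min(max_kbps, new_rate))
    ```

    The design point is that both endpoints can run this from their own packet observations - no QoS contract with the network in the middle is needed.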

    Self-driving vehicles use onboard image-recognition. Data (real-world sensed data and "training" data) gets uploaded to the cloud, and algorithms downloaded. The collision-avoidance system will recognise a risk locally, in microseconds.



    Applications can also focus resources on the most important aspects: I saw a videoconference developer last week talk about using AI to spot "points of interest" such as a face, and prioritise "face packets" over "background packets" in their app. Selective forwarding units (SFUs) act as video-switches which are network-aware, device-aware, cost-aware and "importance-aware" - for example, favouring the main "dominant" speaker.
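    A hypothetical SFU forwarding rule along those lines might look like the sketch below. Real SFUs operate on RTP streams and simulcast layers; the stream structure and bitrates here are invented purely to show the "importance-aware" idea:

    ```python
    def select_layers(streams, budget_kbps):
        """Toy SFU logic: give the dominant speaker the high-quality layer,
        everyone else a thumbnail layer, and drop streams once the bitrate
        budget is spent. Layer bitrates (1200/150 kbps) are illustrative."""
        # Consider the dominant speaker first (False sorts before True).
        ordered = sorted(streams, key=lambda s: not s["dominant"])
        plan, spent = [], 0
        for s in ordered:
            kbps = 1200 if s["dominant"] else 150
            if spent + kbps <= budget_kbps:
                plan.append((s["id"], "high" if s["dominant"] else "thumb"))
                spent += kbps
        return plan
    ```

    Again, this quality decision happens inside the application's own infrastructure, using its own knowledge of who matters in the call - not something a network-side QoS product could replicate.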
     
    Another comms developer (from Facebook, which has 400 million monthly users of voice/video chat) talked about the variables it collects about calls, to optimise quality and user experience "outcome": network conditions, battery level before & after, duration, device type & CPU patterns, codec choice and much more. I suspect they will also soon be able to work out how happy/annoyed the participants are based on emotional analysis. I asked about what FB wanted from network APIs and capabilities - hoping for a QoS reference - and got a blank look. It's not even on the radar screen.

    At another event, GE's Minds and Machines, I saw that the "edge" nodes run a cut-down version of GE's Predix software which can work without the cloud-based mothership when offline - essential when you consider the node could be on a locomotive in a desert, or on a plane at 35,000ft.



    The simple truth is that there is no "end to end QoS" for Internet applications. Nobody controls every step from a user's retina to a server, for generic "permissionless innovation" applications and services. Paid prioritisation is a nonsense concept - the Net Neutrality crowd should stop using that strawman.

    Yes, there's a need for better QoS (or delta-Q or performance management or slicing or whatever other term you want to use) in the middle of networks, and for very specific implementations like critical communications for public safety. 

    The big unknown is for specific, big, mostly mono-directional flows of "content", such as streaming video. There could be an argument for Netflix and YouTube and peers, given they already pay CDNs, although that's a flawed analogy on many levels. But I suspect there's a risk that any QoS payments to non-neutral networks get (more than?) offset by reverse payments by those networks to the video players. If telcos charge Netflix for QoS, it wouldn't surprise me to see Netflix charge telcos for access. It's unclear whether the net effect would be zero, positive or negative.

    But for the wider Public Internet, for consumer mobile apps or enterprise cloud? Guaranteed (or paid) QoS is a myth, and a damaging one. Yes, better-quality, better-managed networks are desirable. Yes, internal core-network use of better performance-management, slicing and other techniques will be important for telcos. Private wireless or fixed broadband networks, where the owner controls the apps and devices, might be an opportunity too.

    But the concept of general, per-app QoS-based Internet access remains a dud. Both network innovation and AI are taking it ever-further from reality. Some developers may work to mark certain packets to assist routing - but they won't be paying SPs for an abstract notion of "quality". The notion of an "application outcome" is itself a wide and moving target, which the network industry only sees through a very narrow lens.
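    For what it's worth, the "mark certain packets to assist routing" mechanism already exists in the plumbing today: an application can set the DSCP field on its own sockets, as sketched below. Whether any network along the path honours, remaps or ignores the mark is entirely up to that network - nothing is "paid for":

    ```python
    import socket

    # DSCP "Expedited Forwarding" (value 46), shifted into the IP TOS byte.
    DSCP_EF = 46 << 2

    def make_marked_socket():
        """Create a UDP socket whose outgoing packets carry an EF DSCP mark.
        This is a hint to routers, not a guarantee of any kind."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)
        return sock
    ```

    This is about as far as "developer-side QoS" goes in practice: a free, best-effort hint, rather than a transacted, abstract notion of "quality" bought from a service provider.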


  • Thoughts on in-building wireless - and an upcoming client webinar

    I've been pondering some of the side-effects and necessary enablers of the accelerating wireless evolution path we're seeing. As well as spectrum issues I've covered a lot recently, deploying indoor infrastructure is going to be another one of them. 

    It is not a new assertion that indoor networks are important for enterprise. The frustrations of poor indoor cellular coverage are universal, while businesses of all types need to provide employees and guests with high-quality Wi-Fi.

    (I'll cover trends in home Wi-Fi in a later post, while I've already written about industrial facilities in a number of previous ones, such as here, as the issues are as much about spectrum as about infrastructure and planning.)

    Various solutions abound for providing good signal indoors - distributed antenna systems (DAS), small cells, or even just deployment of lower-frequency bands in outdoor networks, with better penetration through walls. Yet costs remain considerable, especially as usage increases near-exponentially. Upgrading or retro-fitting existing installations often requires hard economic decisions, given that most such investments are not directly "monetised". Suitable expertise, foresight, planning tools and ongoing monitoring/reporting are important.

    The future, however, will accelerate the role of in-building/on-site wireless connectivity - in both predictable and unpredictable fashion. If we consider what a building might look like in the year 2030, say - and how it may be used and occupied - we can start to see the challenges and opportunities.

    As well as today's well-known and well-described uses of wireless (smartphones and laptops on Wi-Fi and cellular networks), we can expect to see a huge number of new uses emerge. This means that today's implementations will require future-proofing, to support the unknowns of tomorrow. For example, consider the implications of:

    • IoT deployments for smart buildings, such as a proliferation of sensors for heating, security, or the operation of elevators. These may require better coverage in unusual places - in ceiling voids, lift-shafts, basements and so on. Bandwidth and latency requirements will vary hugely, from life-critical but low-data fire/carbon monoxide sensors, to networked video cameras, or once-an-hour reporting from water tanks.
    • Moving devices such as robots or automated trolleys, delivering products around the building. While some will be fully-autonomous, others will need constant wireless connectivity and control.
    • 5G networks will be deployed from around 2020, with further evolutions in following years. These may be extremely demanding on in-building coverage solutions, especially as some networks are likely to use frequencies above 6GHz - perhaps even as high as 80GHz. Extensive use of MIMO and beam-forming may also add complexity to indoor implementations. (A new variant of Wi-Fi known as WiGig also uses 60GHz frequencies.)
    • Likely huge growth in narrowband wireless, connecting low-powered (but maybe very dense) networks of sensors or other endpoints. These may use 3GPP technologies such as NB-IoT, or other options such as LoRa and SigFox.

    All of these trends imply very different traffic patterns. It is not realistic just to extrapolate from current usage - robots may go to places in buildings where humans do not, for example. Mobility requirements may evolve - and so will regulations.

    It is not just new classes of device and application which will need to be supported by well-designed coverage infrastructure, but also new classes of service provider that need to access them.
    • The advent of new unlicensed or shared-spectrum models of frequency allocation (eg CBRS in the US, or MulteFire) may mean the arrival of new operator types - dedicated IoT solutions providers that "bring their own wireless"; enterprises acting as their own local on-site MNOs; various models of "neutral host" and so on.
    • Private enterprise cellular networks are starting to become more widespread. Some governments are allocating spectrum for industries like utilities or smart-cities, while equipment vendors are offering optimised enterprise-grade cellular infrastructure.
    • Potential future regulations for emergency-services wireless connections. Police and fire authorities are increasingly using broadband mobile, both for humans and remote-sensing devices.
    • Distributed-mesh service providers, that operate as decentralised networks with micropayments, or as community initiatives. Some may use blockchain-type arrangements for shared-ownership or membership fees.
    One of the unknowns is about the convergence (or divergence) of different network types. On one hand, cellular networks are embracing Wi-Fi for offload, or for multi-network aggregation, especially as they worry that the return of flat-rate data plans may stress their networks. On the other, some networks are looking at running 4G/5G in unlicensed spectrum instead of (or in addition to) Wi-Fi. Yet more service providers are adopting a "Wi-Fi first" approach, reverting to MVNO models for cellular where needed. Future permutations will likely be more complex still. All will (ideally) need to be well-supported by indoor wireless infrastructure.

    For property developers and owners, the quality of indoor networks is increasingly key in determining valuations and rental occupancy. Already seen in hotels and new office builds, it will be important for today's new constructions and refurbishments to support adequate flexibility and headroom for the next decade or more.

    This takes on further emphasis if you consider the trend towards "buildings-as-a-service", exemplified by organisations such as WeWork. These new classes of facility often incorporate wireless connectivity both as a billable service element and as a way for their owners to manage the properties effectively, in terms of energy-efficiency and security. Other forms of monetisation and data-analytics around wireless location-sensing/tracking are also becoming more important.

    Lastly, in-building challenges will be driven by the specific location and industry, which themselves may change in nature over the next decade. New building materials, construction practices and regulations will impact wireless in unpredictable ways - more metallic insulation perhaps, but also perhaps robotic or pre-fabricated construction allowing wireless systems to be installed more easily. Individual industry verticals will have their own shifts - what will retail stores look like, and how will customers behave, in the era of home deliveries by drone, but more on-premise "experiences", perhaps with AR/VR systems? What will workplaces of the future look like, in an era of self-driving vehicles? Industrial facilities will become increasingly automated, with the largest users of wireless connections being machines rather than humans. Hotels and airports will see shifts in data connectivity needs from employees and visitors, as application usage shifts.

    Small cells look certain to play a more important role in future, and Wi-Fi is going to remain the most important indoor technology for many users and businesses (ignore the fantasists who think it's at risk from 4G / 5G - see my earlier post here).

    There are no easy answers here - even if you construct good scenarios for the future, undoubtedly we will be surprised by events. But some form of upfront discipline in designing and building indoor wireless solutions is ever more critical, given the unknowns. The more future-proofing is possible, the lower the potential risk of being caught out.
      
    On October 5th, at 3pm BST / 4pm CET / 10am EDT, I will be presenting on some of these topics on a webinar for my client iBwave. A link to the event is here.


  • Reinventing Telcos - a preview of my ITU World panel session
    Reposted from an article I wrote for the ITU's blog (link)

    On the 27th of September, I'm moderating a panel discussion at the ITU World 2017 conference in Busan, South Korea, on the theme of "The transformation of telecom operators: reinventing telcos."

    This is a topic we've heard discussed for at least the last 10 years in various forms, yet we still seem to be at or near the starting point. The panel will look at what we can do differently to change the dynamics. In particular, it will focus on the internal organisation and processes of the telecom industry, both within and between telcos. Other conference sessions will consider new services, industry verticals, and the customer perspective.

    Across the globe, traditional CSPs are trying to adapt their cultures and operational models, in the face of ever-increasing competition and substitution from new players. As well as other rival service providers such as cable operators, telcos now face challenges from Internet-based peers, niche specialist SPs (for example in IoT), and even enterprises and governments building their own networks. On the horizon, new technologies such as AI threaten to change the landscape even more. The nature of what it means to be a "service provider" is changing.

    This goes beyond just implementing next-generation networks, whether fixed or wireless. While these are necessary, they are not sufficient for true reinvention - and they also require enormous new investment. The real question is what options exist for operators to best allocate scarce resources (money, skills and time) to maximize the value from such investments in infrastructure. There is also a risk that emphasis on the "hard challenges" of raising finance, acquiring spectrum or sites, and building networks, means less focus on the "softer" problems of culture change, service design, organisation, customer-centricity and partnership.

    This in turn poses problems for regulators, especially at national levels. Usually driven by domestic politics and local economic situations, they somehow need to ensure a strategically-important sector remains healthy, while also recognising the huge global-scale advances from many technologies and services that transcend national or regional boundaries.

    It is not realistic for every country to have three or four competing local providers of social networks, IoT management tools or future AI platforms. Citizens and businesses expect similar functions to work internationally and immediately, with rapid incremental improvements. Unlike networks, innovation in services and applications often favours fast-evolving proprietary platforms, rather than committee-led interoperable services like the PSTN.

    Telcos - and their regulators - have until recently been poorly-suited to this new world, although some are making interesting attempts to "turn the super-tanker".

    The session will touch on four or five key areas:
    • Innovation: What is the best way for telcos to innovate, given regulatory & cultural constraints? Arms-length subsidiaries? Huge retraining programmes? Business units targeted on verticals / technologies? How much freedom should product units have - for example, should they be forced to use the company-wide core network & NFV platforms, or should they be able to go "off piste" and act independently? Are "platform plays" viable in telecoms, or just unrealistic wishful-thinking?
    • Regulation: What should regulators be doing, to simultaneously encourage new entrants/innovators, but also allow telcos to make enough returns to take long-term investment views? And how can regulators deal with the overlaps, competition and tensions between very distinct groups, such as traditional infrastructure-oriented telcos and Internet-based "web-scale" platforms? One group has huge capex and strict regulatory constraints, the other huge R&D and greater risk of failure: how can one set of rules span both, where they intersect?
    • Industry coordination: How do the current pan-industry structures (eg bodies like ITU & GSMA & 3GPP) need to change? Can they be made faster, more willing to take risks, quicker to acknowledge errors, and more open to non-traditional stakeholders?
    • Technology catalysts: Are 5G & NFV really "transformational" enablers of re-invention? Or will prolonged hybrid/transition phases from older tech mean there can't be fast shifts? How should telcos deploy technologies such as AI, blockchain or IoT internally, as part of their reinvention?
    One other thing should frame the debate: language - how we describe the problems, or the wider communications environment. Words, analogies and narrative arcs are psychologically important - they shape the way we perceive problems, and can either enhance or misdirect our responses. We should recognise the unhelpfulness of terms like:
    • "Digital": Morse Code was digital in 1843. Telecom networks have used digital technology for decades, as have most businesses. It's about steady progress and evolution, not a "digitalisation" step change.
    • "OTT": usually said in a negative tone, I believe this prejudiced description of Internet services has hugely harmed the telecoms industry over the last decade. For example, it obscures the fact that larger Internet companies do more deep technology than telcos: they make network equipment and chips, build infrastructure and conduct billions of dollars of R&D.
    • "Level playing field": telco executives, regulators and lobbyists use this phrase with abandon. Yet the analogy is meaningless when everyone is playing entirely different sports.
    The narrative needs to change substantially. My ITU Telecom World 2017 session aims to reset the debate, and catalyse thoughtful (but rapid!) future action by operators, regulators and industry bodies alike.


    If you are seeking a moderator or speaker for a telecoms strategy or policy event, please contact information AT disruptive-analysis DOT com


  • Huawei Connect: IT services, Enterprise Cellular, video analytics, AI and more
    I spent most of last week in Shanghai, attending Huawei's Connect conference and trade show. It was a good chance to get a deep-dive into the company's enterprise activities, as well as to get my head around broader trends and influences in China's technology sector.

    I normally engage with Huawei through its analyst relations function, but this trip was organised by a different team. The company apparently considers me a "KOL" ("key opinion leader"), which is a rather diffuse bucket used for a mix of outspoken independent analysts, public-facing academics, video/social bloggers and assorted others. I'm not sure I set out to lead opinions, but I'm certainly happy to voice my own.

    (Unlike the analyst events I usually attend, the KOL group isn't really made up of direct competitors, so there's a more collegiate atmosphere - and a very lively WeChat group, partly with logistics about meeting times/locations but also sharing photos or thoughts about the event).

    Connect is mostly driven by Huawei's enterprise business unit, which is growing fast (about $6bn revenues in 2016, up 47% [link]) and focuses on cloud and big "infrastructure-led" IT and networking projects, in sectors like smart cities, advanced manufacturing, oil and gas IoT, and systems for transport industries such as rail and ports. There's a heavy emphasis on IoT platforms and networks, cloud and storage, video/image surveillance analysis and a lot of AI.

    It clearly intends to be a very significant player in its chosen sectors, using its existing high IT profile in China, plus its global telecom footprint, as a springboard for other international ICT theatres. Unlike Europe, North America and India, China has few global-scale IT companies, especially in systems integration or outsourcing. The closest to a "Chinese version of IBM" is probably ChinaSoft, which has a deep partnership with Huawei anyway, and in which Huawei owns a significant shareholding.

    Thinking more about technology-sector comparables, very few have a similar blend of infrastructure/network/telecom expertise, systems integration/services scale and cloud capabilities. Given Ericsson's recent announcements of pulling back on direct enterprise-related initiatives to focus on CSPs and its Cisco partnership as channels (a strategic error, I feel), it's only really Nokia and maybe NEC that have the scope to push the same big-infrastructure enterprise "ICT" vision, although neither has the full-scale IT services business that Huawei does. Perhaps there's yet more scope for consolidation between traditional IT companies and network vendors. (Ericsson+IBM? Nokia+HP? NEC+Tata? Who knows....)

    One other thing stood out about the event: there was very little spoken about telco networks, Huawei's main business, or the synergies between that business unit and its faster-growing enterprise sibling. 

    There was much more about robots and face-recognition than network-slicing and NFV. The main mention of IMS that I saw was in the context of critical communications for public safety, eg push-to-talk. The X-Labs group assessing possible future 5G use-cases was talking about connected drones, or cloud-integrated video-enabled helmets for the blind. There was a "carrier" section in the vertical-industries show hall, but that seemed mostly focused on cloud solutions for telcos.

    Conspicuously, there was almost no reference to delivery models for network or IoT capabilities for enterprises. There was no assumption that everything would be provided "as a service", or in particular, delivered by a CSP. There was tacit recognition that some organisations want to own their own infrastructure / private clouds, some may go to a specialist integrator (eg an automation/IoT specialist like Honeywell or GE), and some might use an arm of a telco. For example, T-Systems, Deutsche Telekom's IT unit, was there talking about a Huawei-based storage cloud deployed for CERN, the leading particle-physics research institution on the Swiss/French border.

    Huawei also offers its own cloud services, but is quite self-effacing about it, only wishing to become "one of the top 5 clouds" (presumably alongside Amazon, Google, IBM and maybe Microsoft, with which it also partners) and saying that "1% is enough for us". I don't think Jeff Bezos is going to have too many sleepless nights, although Alibaba, Cisco and Oracle may have different opinions on the top tier's membership, Alibaba especially in China itself.


    In terms of specific takeouts on my normal coverage areas, a few things stood out:
    • Enterprise Cellular: This was everywhere at the event, under the brand eLTE. It is a sort of precursor to a MulteFire / CBRS model of non-carrier cellular networks. There's a fairly large eLTE ecosystem, especially around public-safety networks but also manufacturing, transport and other verticals. There was a demo of a robot connected with private cellular. There are three variants:
      • An unlicensed LTE-U version that doesn't need a licensed "anchor" like LAA, so can be deployed by any organisation
      • A licensed-band version, where organisations (such as law-enforcement or utilities) can manage to get dedicated spectrum by one means or another
      • A narrowband version, which is essentially NB-IoT in unlicensed bands such as ISM spectrum (which in China is in the 500MHz range, or 900MHz in the US)
      • All of these were targeted at industry verticals. There wasn't any mention of other use-cases like neutral-host providers, hybrid MNO/MVNOs, mesh networks, or consumer-oriented plays. 
      • There wasn't any explicit mention of shared-spectrum models like CBRS, but it seems to fit under the second category.
      • This all fits nicely with the recent work I've done on private/enterprise cellular. It will be an ongoing theme as it is clearly "happening", including presentations at a few upcoming regulatory conferences, and another workshop with Caroline Gabriel in London on Dec 1 (link)

    • IoT networks: There was a huge emphasis on NB-IoT around the event, as well as broadband 4.5G/5G options for drones, connected vehicles and more demanding applications. I didn't see any mention of LoRa, SigFox, or even LTE-M or Cat1, though WiFi and ZigBee cropped up on various slides. There were some interesting examples of NB-IoT deployments, notably for cities, or specific OEM-led integrations such as China's booming shared-bicycle sector.
    • Video and facial-recognition networks/analytics: This was a huge theme, as it bridges Huawei's key domains of mobile broadband, cloud services and AI. A major focus is "safe cities", especially using networked video cameras to manage traffic, enforce public safety - and track/spot individual people, whether that is missing children, criminals, or attendees at a trade show. (I joked on Twitter that Huawei had probably been tracking people around the event itself - only for the next slide to reveal that it had been doing exactly that). Missing from most of the material was much mention of privacy - which appears to be less of a concern in China than it would be in much of Europe. That said, we may be fighting a losing battle on that front, as this week's Economist cover & feature articles on face-recognition point out (link).
    • AI: Beyond video analysis, a central thrust of the event was machine learning, graph analysis, image recognition and other forms of AI. I didn't get a chance to go into too much depth on this, but it's pretty clear this is central to Huawei's cloud ambitions, and will probably link into carrier-domain services like smart-home / personal voice assistants, as well as "big data" corporate applications.
    • We also had a briefing with the handset unit, which discussed the new Kirin AI-oriented chip which includes a neural processing element, as well as CPU, GPU and DSP. This should enable better and more power-efficient local classification of images, without the need to send all data to the cloud. This fits into my ongoing debate on whether 5G's low-latency business case might be undermined by more edge-processing. (link)
    • WiFi: Although not as big an emphasis as 4G/5G, Huawei nevertheless had a fair bit of WiFi on display, particularly for large-scale deployments in cities or large public venues like sports stadia. It also had an interesting hybrid WiFi / IoT networking unit, which for now focuses on Bluetooth, RFID and ZigBee but I guess could incorporate NB-IoT (or its eLTE variant), or even LoRa if a client wanted.
    • UC/UCaaS: Although not a major focus of the event (itself quite telling), there was a fair bit of unified communications, conferencing and even cPaaS around the show. There was a Broadsoft-style UC platform for operators, and various tools for multi-party meetings. It's not obvious that Huawei is aiming to be a Twilio / Tokbox-style platform provider, although it does have APIs (including WebRTC) for embedding communications in apps and websites. I didn't see any signs of a Slack/Spark/HipChat rival. Notably, Huawei is partnering with Microsoft on Office 365, so may not launch its own full UCaaS direct-to-enterprise product.
    • I particularly liked one partner booth, "Call Cloud", which uses a crowd-style / sharing-economy approach to sourcing customer-service reps, with in-app video. It apparently has 7 million (!) people signed up as potential providers of informal information or support.

    Overall, an interesting few days for me, exploring a side to Huawei I hadn't seen before. It's always hard to get a full perspective from a single-vendor event, but it struck me as one of the only real, fully-encompassing examples I've seen of an acronym I normally dislike - ICT. That said, some more candour about positioning vs. competitors would have been welcome. We all know who they are - so descriptions of differentiation would have been useful, even if rose-tinted.

    It's also brought home to me how important it is to have a captive market to drive scale, which can then improve adoption rates (and prices) elsewhere. Amazon does it with AWS: its own huge retail business is an "anchor tenant" that helps create traffic volumes, which then become reinforced by third parties' cloud usage. Huawei appears to do something similar with domestic government and enterprise business: millions of CCTV cameras, large-scale city networks and local IoT uses are helping it exploit pre-existing scale and experience, which it can then apply elsewhere. There is also a sensible approach to partnering, for example around IoT, with the likes of GE collaborating on distinct parts of the market.

    One final comment: the layout of the trade show was excellent. One hall was organised per-vertical, with sections on Manufacturing, Public Safety, Oil & Gas, Finance etc. The other hall was per-technology, with sections on Cloud, eLTE, WiFi, NB-IoT, Developers and so on. I wish other events were similarly well-structured.


