Dean Bubley's Disruptive Wireless: Thought-leading wireless industry analysis

  • 5G & IoT? We need to talk about latency


    Much of the discussion around the rationale for 5G - and especially the so-called "ultra-reliable" high-QoS versions - centres on minimising network latency. Edge-computing architectures like MEC also focus on this. The worthy goal of 1 millisecond roundtrip time is often mentioned, usually in the context of applications like autonomous vehicles with snappy responses, AR/VR headsets without nausea, the "tactile Internet" and remote drone/robot control.

    Usually, that is accompanied by some mention of 20 or 50 billion connected devices by [date X], and perhaps trillions of dollars of IoT-enabled value.

    In many ways, this is irrelevant at best, and duplicitous and misleading at worst.

    IoT devices and applications will likely span 10 or more orders of magnitude in latency requirements, not just the two orders between 1-10ms and 10-100ms. Often, the main value of IoT comes from changes over long periods, not realtime control or telemetry.

    Think about timescales a bit more deeply:

    • Sensors on elevator doors may send sporadic data, to predict slowly-worsening mechanical problems - so an engineer might be sent a month before the normal maintenance visit.
    • A car might download new engine-management software once a week, and upload traffic observations and engine-performance data once a day (maybe waiting to do it over WiFi, in the owner's garage, as it's not time-critical).
    • A large oil storage tank, or a water well, might have a depth-gauge giving readings once an hour.
    • A temperature sensor and thermostat in an elderly person's home, to manage health and welfare, might track readings and respond with control messages every 10 minutes. Room temperatures change only slowly.
    • A shared bicycle might report its position every minute - and unlock in under 10 seconds when the user buys access with their smartphone app.
    • A payment or security-access tag should check identity and open a door, or confirm a transaction, in a second or two.
    • A networked video-surveillance system may need to send a facial image, and get a response in a tenth of a second, before the subject moves out of camera-shot.
    • A doctor's endoscope or microsurgery tool might need to respond to controls (and send haptic feedback) 100 times a second - ie every 10ms.
    • A rapidly-moving drone may need to react in a millisecond to a control signal, or a locally-recognised risk.
    • A sensitive industrial process-control system may need to be able to respond in 10s or 100s of microseconds to avoid damage to finely-calibrated machinery.
    • Image sensors and various network sync mechanisms may require response times measured in nanoseconds.
    I have not seen any analysis that tries to divide the billions of devices, or trillions of dollars, into these very-different cohorts of time-sensitivity. Given the assumptions underpinning a lot of 5G business cases, I'd suggest that this type of work is crucial. Some of these use-cases are slow enough that sending data by 2G is fine (or by mail, in some cases!). Others are so fast they'll need fibre - or compute capability located locally on-device, or even on-chip, rather than in the cloud, even if it's an "edge" node.
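
    To make the orders-of-magnitude point concrete, here is a minimal sketch (in Python, with purely illustrative use-cases and latency figures of my own choosing, not drawn from any formal analysis) that buckets IoT use-cases by the base-10 order of magnitude of their latency tolerance:

```python
import math

# Hypothetical latency tolerances in seconds - illustrative only
use_cases = {
    "elevator-door wear trend":      30 * 24 * 3600,   # ~a month
    "tank depth gauge":              3600,             # hourly
    "thermostat control loop":       600,              # every 10 minutes
    "bike-share unlock":             10,
    "payment / door access":         1,
    "facial recognition response":   0.1,
    "surgical haptic feedback":      0.01,             # 10 ms
    "drone control reaction":        0.001,            # 1 ms
    "industrial process protection": 0.0001,           # 100 microseconds
    "sensor / network sync":         1e-9,             # nanoseconds
}

# Group by order of magnitude: floor(log10(seconds))
buckets = {}
for name, seconds in use_cases.items():
    buckets.setdefault(math.floor(math.log10(seconds)), []).append(name)

for exponent in sorted(buckets, reverse=True):
    print(f"10^{exponent} s: {', '.join(buckets[exponent])}")

# With these made-up numbers the spread runs from ~10^6 s down to 10^-9 s -
# the 1ms vs 10ms distinction covers only a sliver of that range.
```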

    I suspect (this is a wild guess, I'll admit) that the proportion of IoT devices, for which there's a real difference between 1ms and 10ms and 100ms, will be less than 10%, and possibly less than 1% of the total.

    (Separately, the network access performance might be swamped by extra latency added by security functions, or by edge-computing nodes being bypassed by VPN tunnels.)

    The proportion of accrued value may be similarly low. A lot of the IoT examples I hear about are either long time-series collections of sensor data (for asset performance-management and predictive maintenance), or have fairly loose timing constraints. A farm's moisture sensors and irrigation pumps don't need millisecond response times. Conversely, a chemical plant may need to measure and alter pressures or flows in microseconds.

    Are we focusing 5G too much on the occasional Goldilocks of not-too-fast and not-too-slow?


  • Machine-learning & operations for telcos; and a discount-code for AI World
    A lot of discussion about deep/machine learning in the telecoms industry focuses on either customer management, or applications in various enterprise verticals. Typical use-cases are around assisted self-care for end-users (for example, diagnosing WiFi configuration problems), or spotting customer behaviours that suggest imminent churn (or fraud). 

    Some telcos are looking at chatbots or voice assistants, for example for connected-home applications. Then there are offers around image recognition, perhaps for public-safety use, where telecoms operators have traditionally had a major role in many countries.

    All this remains very important, but recently, I've been seeing a lot more "behind-the-scenes" use-cases being discussed, as both mobile and fixed operators look to improve their operational effectiveness and cost-base. Many of these are less glamorous, and less likely to be highlighted in non-telecoms articles, but they are nonetheless important.

    A few examples have been:
    • BT has talked about using ML to improve efficiency of maintenance staff schedules, depending on particular tasks and base locations.
    • Vodafone talked at the recent Huawei Mobile Broadband Forum about AI being used in radio networks for predictive load-balancing, predicting patterns of users/usage, and optimising configurations and parameters for better coverage and throughput. It also referenced using ML to help distinguish real from fake alarms.
    • KT in Korea talked about collecting 50TB per day of operational data from its fibre network, and using it to optimise performance, improve security and predict faults. (It also realised that it had accidentally created a huge realtime seismic detector, if earthquakes - or maybe North Korean nuclear detonations - flex the fibres.)
    • Telefonica is working with Facebook's Telecom Infra Project initiative to map population density (from satellite images) to network usage data, to work out coverage gaps (see here).
    As well as traditional telecom operators, new breeds of Internet-based communications providers are also looking at instrumenting their services to collect data and optimise for multiple parameters. For example, Facebook (which is a "new telco") is improving its voice/video Messenger app by collecting data from its 400 million users. This involves not just call "quality": it maps codec use to battery levels on mobile devices, and various other measurables. Potentially this allows a much broader type of optimisation than just network-centric, by considering the other trade-offs for users such as length of call vs. power consumption vs. video quality.
    The key to all of this is collecting operational data in the first place, whether that is from network elements, end-user devices - or even external data sources like weather or traffic information.
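
    As an illustration of what becomes possible once that operational data is collected, here is a minimal sketch (Python, using made-up numbers and a deliberately simple rolling z-score rather than any operator's actual ML pipeline) of flagging anomalous readings in a stream of network telemetry - the sort of first step that might feed fault prediction or real-vs-fake alarm triage:

```python
from collections import deque
from statistics import mean, stdev

def flag_anomalies(readings, window=20, threshold=3.0):
    """Yield (index, value) for readings far outside the recent rolling window.

    A toy stand-in for the ML-based fault prediction / alarm triage
    described above - real systems would use far richer features.
    """
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        history.append(value)

# Example: synthetic optical power levels (dBm) with one injected fault
telemetry = [-20.0 + 0.05 * (i % 7) for i in range(200)]
telemetry[150] = -35.0   # sudden drop, e.g. a degrading fibre splice

for index, value in flag_anomalies(telemetry):
    print(f"sample {index}: {value:.1f} dBm looks anomalous")
```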

    I'll be digging into this in various future posts - but I'll also be speaking at various conferences and panel sessions about Telecoms & AI in coming months.

    In particular, I'm on a Mobile & AI Panel at AI World in Boston, which runs from Dec 11-13. Details are at https://aiworld.com/ - and if you want to attend, I have a code for a $200 discount for 2 and 3-day VIP Conference Passes: AIW200DA

    In January, I'll also be covering AI at the PTC'18 event in Honolulu from Jan 21-24 (link here).

    And in April, I'll be at the AI Net event in Paris (link here) moderating a panel and also talking about AI in smartphones.

    Overall - I'm expecting a huge #TelcoFuturism push around all aspects of AI in telecoms in 2018, but it's especially the operational and network-management functions that I think will make a big difference. It also coincides with the arrival of both 5G and large-scale NFV, and the intersection points will have a further multiplicative effect.



  • 2nd Workshop on Enterprise/Private Cellular, December 1st, London
    NEW: Disruptive Analysis & Rethink Research joint workshop on Enterprise Cellular Networks, London, December 1st, 2017

    At the end of May, I co-ran a day-long workshop with Caroline Gabriel covering enterprise and private LTE, for industry verticals, neutral-hosts, indoors, government and other environments.

    We had around 20 people involved, with a mix of presentations, group discussions and exercises, and ample time for networking. The event was held under the Chatham House Rule, so people could talk confidentially without direct attribution of comments.

    Six months later, we're repeating the workshop, on December 1st in Central London. A limited number of places are available.

    New mobile devices/applications and the emergence of the Industrial IoT mean that high-quality - often mission-critical - networks are required for new systems and applications. These can span both on-premise coverage (eg at a port, factory, office, wind-farm or hospital) and the wide-area (eg for smart cities or future rail networks).

    A lot has progressed in 2017, and I'm expecting 2018 to bring further developments:
    • Regulators in some markets have actively looked to provide frequency ranges for public safety, large businesses and other uses.
    • Every spectrum conference I've been to has had a session or two on shared bands, and the industry is also taking unlicensed technologies more seriously.
    • Rules on CBRS have solidified in the US, albeit with possible changes proposed to the FCC. Many industries and major companies (eg the oil sector) are seriously engaged.
    • MulteFire is looking "real", with deployments (and devices) expected next year.
    • Vendors including Nokia, Ericsson and Huawei have all indicated growing interest in private cellular, as well as a host of smaller players, or WiFi specialists looking to add cellular radios.
    • The 5G community is recognising that "verticals" may not always be best-addressed by traditional MNOs, and may require new models. Even most operators concede that they can't do everything - especially for industrial IoT connectivity or uses with heavy legal liability or certification requirements.
    • Various large industrial and utility/rail players have shown a lot of interest in private LTE, alongside WiFi meshes, LPWAN and other vertical-oriented network technologies.
    • Community, rural and emerging-market players have started to look at cellular in unlicensed/shared bands to reduce costs and improve coverage. 



    Workshop structure & coverage

    The day will have a maximum of 30 attendees to ensure a high level of discussion and interaction. We expect a diverse mix of service providers, vendors, regulators and other interested parties such as enterprises, investors and developers. 

    It will be suitable for C-level executives, strategists, product management, marketing functions, CTO office, market analysts and regulatory affairs specialists.  

    It will be led by myself and Rethink Research's Caroline Gabriel (link). We are both well-known industry figures, with many years of broad communications industry analysis - and outspoken views - between us.


     
    Topics to be discussed include:

    • Key market drivers: IoT, automation, mobile workers, vertical-specific operational and regulatory issues, indoor coverage, democratisation of wireless expertise
    • Spectrum-sharing, including unlicensed, light/local-licensing and CBRS-type models. What bands are different countries' regulators looking at? 2.6GHz, 3.5GHz, 4GHz, 28GHz, others?
    • Evolution of key enabling technologies such as MulteFire, 5G, NB-IoT, network-slicing, SDN, small cells, edge computing, and enterprise-grade IMS cores
    • Regulatory/policy issues: spectrum allocation, competition, roaming, repeaters, national infrastructure strategies and broader "Industry 4.0" economic goals
    • The shifting roles of MVNOs, MVNEs, neutral hosts and future "slice operators"
    • Numbering and identity: eSIM, multi-IMSI, MNC codes
    • How will voice & UC & push-to-talk work on private cellular networks?
    • Commercial impacts, new business model opportunities & threats to incumbents
    • Vendor dynamics: Existing network equipment vendors, enterprise solution providers, vertical wireless players, managed services companies, new industrial & Internet players (eg GE, Google), implications for BSS/OSS, impact of open-source
    (I've covered several of these themes in previous posts and presentations. If you want more detail about some of my thinking, see links here and here and here. We will be going into a lot more depth in the workshop itself. And for a quick 20-minute "taster", see the video of my presentation from the recent TADSummit event in Lisbon.)

    The workshop will take place at the Westbury Hotel in Mayfair, central London [link]. It will run from 9am-5pm, with plenty of time for networking and interactive discussion. Come prepared to think and talk, as well as listen - these are "lean-forward" days. Coffee and lunch are included.

    The attendance fee is £699+VAT, with a discount for a second attendee. Payment can be made via credit card (see Paypal Buy Now button below), or purchase-order & invoice on request.  

    Note: sometimes Paypal can be a bit awkward, especially with corporate cards or accounts. Drop me an email if you experience any problems or for further details: 
    information AT disruptive-analysis DOT com






  • Debunking the Network QoS myth
    Every few years, the network industry - vendors, operators & industry bodies - suffers a mass delusion: that there is a market for end-to-end network QoS for specific applications. The idea is that end-users, application developers - or ideally both - will pay telcos for prioritised/optimised connections of specific "quality", usually defined in terms of speed, latency & jitter (variability).

    I've watched it for at least a dozen years, usually in 3-year waves:
    • We had enterprise networks promising differentiated classes of service on VPNs or the corporate connection to the Internet. Avoid the impact of the marketing department watching cat videos!
    • We had countless failed iterations of the "turbo boost" button for broadband, fixed or mobile.
    • We had the never-realised "two-sided markets" for broadband, featuring APIs that developers would use to pay for "guaranteed QoS".
    • We had numerous cycles of pointless Net Neutrality arguments, talking about "paid prioritisation", a strawman of massive proportions. (Hint: no content or app developer has ever had lobbyists pleading for their right to buy QoS, only telcos asking to be able to sell it. Compare with, say, campaigns for marijuana decriminalisation).
    • We currently have 5G "network slicing" concepts, promising that future MNOs will be able to "sell a slice" to an enterprise, a car manufacturer, a city or whatever.
    • My long-standing colleague & interlocutor Martin Geddes is pitching a concept of app-focused engineering of networks, including stopping "over-delivering" broadband to best-effort applications, thus forcing them to predict and right-size their demands on the network.
    In my view, most of these attempts will fail, especially when applied to last-mile Internet access technologies, and even more so to wireless/mobile access. There isn't, and never will be, a broad and open market for "end-to-end network QoS" for Internet applications. We are seeing network-aware applications accelerating much faster than application-aware networks. (See this paper I wrote 2 years ago - link).

    Where QoS works is where one organisation controls both ends of a connection AND also tightly-defines and controls the applications:
    • A fixed-broadband provider can protect IP telephony & IPTV on home broadband between the central office & the home gateway.
    • An enterprise can build a private network & prioritise its most important application(s), plus maybe a connection to a public cloud or UCaaS service.
    • Mobile operators can tune a 4G network to prioritise VoLTE.
    • Telco core and transport networks can apply differential QoS to particular wholesale customers, or to their own various retail requirements (eg enterprise users' data vs. low-end consumers, or cell-site timing signals and backhaul vs. user data). 
    • Industrial process & control systems use a variety of special realtime connection protocols and networks. Vendors of "OT" (operational technology) tend to view IT/telecoms and TCP/IP as quaint. The IT/OT boundary is the real "edge".
    Typically these efforts are costly and complex (VoLTE was frequently described by MNOs as one of their hardest projects to implement), make it hard to evolve the application rapidly because of dependencies on the network and testing requirements, and often have very limited or negative ROI. More importantly, they don't involve prioritising chunks of the public Internet - the telco-utopian "but Netflix will pay" story.

    There are a number of practical reasons why paid-QoS is a myth. And there's also a growing set of reasons why it won't exist (for the most part) in future either, as new techniques are applied to deal with variable/unpredictable networks.

    An incomplete list of reasons why Internet-access QoS isn't a "market" includes:
    • Coverage. Networks aren't - and won't be - completely ubiquitous. Self-driving cars need to be able to work offline, whether in a basement car-park, during a network outage in a hurricane, or in the middle of a forest. The vehicle won't ask the cloud for permission to brake, even if it's got promised millisecond latency. Nobody pays for 99.99% access only 80% of the time.
    • The network can't accurately control or predict wireless effects at micro-scale, ie RF absorption or interference. It can minimise the damage (eg with MIMO, multiple antennas) or anticipate problems (weather forecast of rain = impact on mmWave signals).
    • End-user connections to applications generally go via local WiFi or LAN connections, which service providers cannot monitor or control.
    • No application developer wants to cut QoS deals with 800 different global operators, with different pricing & capabilities. (Or worse, 800 million different WiFi owners).
    • 5G, 4G, 3G and zero-G all coexist. There is no blanket coverage. Nobody will pay for slicing or QoS (if it works) on the small islands of 5G surrounded by an ocean of lesser networks.

    • "Applications" are usually mashups of dozens of separate components created by different companies. Ads, 3rd-party APIs, cloud components, JavaScript, chunks of data from CDNs, security layers and so on. Trying to map all of these to separate (but somehow linked) quality agreements is a combinatorial nightmare.
    • Devices and applications have multiple features and functions. A car manufacturer wouldn't want one slice, but ten - engine telemetry, TV for the kids in the back seat, assisted-driving, navigation, security updates, machine-vision uploads and so on all have very different requirements and business models.
    • Lots of IoT stuff is latency-insensitive. For an elevator maintenance company, a latency of a week is fine to see if the doors are sticking a bit, and an engineer needs to arrive a month earlier than scheduled.
    • I don't know exactly how "serverless computing" works, but I suspect that it - and future software/cloud iterations - will take us even further from having apps asking the network for permission/quality on the fly.
    • Multiple networks are becoming inevitable, whether they are bonded (eg SD-WANs or Apple's use of TCP Multipath), used in tandem for different functions (4G + SigFox combo chips), meshed in new ways, or linked to some sort of arbitrage function (multi-IMSI MVNOs, or dual-SIM/radio devices).  See also my piece on "Quasi-QoS" from last year (link)

    • Wider use of VPNs, proxies and encryption will mean the network can't unilaterally make decisions on Internet QoS, even if the laws allow it.
    • Increasing use of P2P technologies (or D2D devices) which don't involve service providers' control infrastructure at all.
    • Network APIs would probably have to be surfaced to developers via OS/browser functions. Which then means getting Apple, Google, Microsoft et al to act as some sort of "QoS storefront". Good luck with that.
    • No developer will pay for QoS when "normal" service is running fine. And when it isn't, the network has a pricing/delivery challenge when everyone tries to get premium QoS during congestion simultaneously. (I wrote about this in 2009 - link).
    • Scaling the back-end systems for application/network QoS, to perhaps billions of transactions per second, is a non-starter. (Or wishful thinking, if you're a vendor).
    • There's probably some extra horribleness from GDPR privacy regulations in Europe and information-collection consent, which further complicates QoS as it's "processing". I'll leave that one to the lawyers, though.
    • It's anyone's guess what new attack-surfaces emerge from a more QoS-ified Internet. I can think of a few.
    But the bigger issue here is that application and device developers generally don't know or care about how networks work, or (in general) have any willingness to pay. Yes, there's a handful of exceptions - maybe mobile operators wanting timing sync for their femtocells, for example. Safety-critical communications obviously needs quality guarantees, but doesn't use the public Internet. Again, these link back to predictable applications and a willingness to engineer the connection specifically for them.

    But the usually-cited examples, such as videoconferencing providers, IoT specialists, car companies, AR/VR firms and so on are not a viable market for Internet QoS. They have other problems to solve, and many approaches to delivering "outcomes" for their users.

    A key issue is that "network performance" is not considered separately and independently. Many developers try to balance network usage against other variables such as battery life / power consumption. They also think about other constraints - CPU and screen limitations, user behaviour and psychology, the costs of cloud storage/compute, device OS variations and updates, and so on. So for instance, an app might choose a given video codec based on what it estimates about available network bandwidth, plus what it knows about the user, battery and so on. It's a multi-variable problem, not just "how can the network offer better quality".
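
    As a purely hypothetical illustration of that multi-variable trade-off (the thresholds, codec names and conditions are mine, not taken from any real app), a client-side codec choice might look something like this:

```python
from dataclasses import dataclass

@dataclass
class Context:
    est_bandwidth_kbps: float   # the app's own estimate, not a network promise
    battery_pct: float
    on_metered_network: bool

def choose_video_codec(ctx: Context) -> str:
    """Pick a codec profile from local signals only.

    Illustrative heuristic: no QoS API, no network permission -
    the app simply trades quality against battery and data cost.
    """
    if ctx.battery_pct < 15:
        return "audio-only"                 # preserve battery above all
    if ctx.on_metered_network and ctx.est_bandwidth_kbps < 500:
        return "h264-240p"                  # cheap on data and CPU
    if ctx.est_bandwidth_kbps > 2500 and ctx.battery_pct > 50:
        return "vp9-720p"                   # better quality, costlier to encode
    return "h264-480p"                      # safe middle ground

print(choose_video_codec(Context(est_bandwidth_kbps=3000,
                                 battery_pct=80,
                                 on_metered_network=False)))
```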




    Linked to this is analytics, machine learning and AI. There are huge advances in tuning applications (or connection-managers) to deal with network limitations, whether that relates to performance, cost or battery use. Applications can watch rates of packet throughput and drops from both ends, and make decisions about how to limit the impact of congestion. (See also this link to an earlier piece I wrote on AI vs. QoS.)
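
    A very rough sketch of that kind of end-to-end adaptation (an AIMD-style rate controller driven only by observed loss, with invented constants - not any specific product's algorithm) might be:

```python
def adapt_send_rate(current_kbps, observed_loss_ratio,
                    floor_kbps=150, ceiling_kbps=4000):
    """Adjust the sending rate from receiver-reported loss alone.

    AIMD-flavoured toy: back off sharply under heavy loss, probe
    upwards gently when the path looks clean. No network QoS API involved.
    """
    if observed_loss_ratio > 0.05:          # heavy loss: cut hard
        new_rate = current_kbps * 0.5
    elif observed_loss_ratio > 0.01:        # mild loss: trim
        new_rate = current_kbps * 0.85
    else:                                   # clean: probe for headroom
        new_rate = current_kbps + 100
    return max(floor_kbps, min(ceiling_kbps, new_rate))

rate = 1000.0
for loss in [0.0, 0.0, 0.02, 0.08, 0.0, 0.0]:   # simulated feedback reports
    rate = adapt_send_rate(rate, loss)
    print(f"loss={loss:.2f} -> send at {rate:.0f} kbps")
```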

    Self-driving vehicles use onboard image-recognition. Data (real-world sensed data and "training" data) gets uploaded to the cloud, and algorithms downloaded. The collision-avoidance system will recognise a risk locally, in microseconds.



    Applications can also focus resources on the most-important aspects: I saw a videoconference developer last week talk about using AI to spot "points of interest" such as a face, and prioritise "face packets" over "background packets" in their app. Selective forwarding units (SFUs) act as video-switches which are network-aware, device-aware, cost-aware and "importance-aware" - for example, favouring the main "dominant" speaker.
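
    To show the shape of that "importance-aware" forwarding decision (the stream names, scores and budget figure are all invented for illustration - real SFUs weigh many more signals), a minimal sketch could be:

```python
def plan_forwarding(streams, budget_kbps):
    """Allocate a downlink budget across incoming streams by importance.

    Toy model of an importance-aware SFU: rank streams (e.g. dominant
    speaker first), forward each at full rate while budget remains,
    then degrade or drop the least-important ones.
    """
    plan = {}
    remaining = budget_kbps
    for name, bitrate, importance in sorted(streams, key=lambda s: -s[2]):
        if bitrate <= remaining:
            plan[name] = bitrate            # forward as-is
        elif remaining > 100:
            plan[name] = remaining          # degraded / lower simulcast layer
        else:
            plan[name] = 0                  # drop: out of budget
        remaining -= plan[name]
    return plan

streams = [("dominant speaker", 1200, 0.9),
           ("screen share",      800, 0.8),
           ("listener A",        400, 0.2),
           ("listener B",        400, 0.1)]
print(plan_forwarding(streams, budget_kbps=2000))
```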
     
    Another comms developer (from Facebook, which has 400 million monthly users of voice/video chat) talked about the variables it collects about calls, to optimise quality and user experience "outcome": network conditions, battery level before & after, duration, device type & CPU patterns, codec choice and much more. I suspect they will also soon be able to work out how happy/annoyed the participants are based on emotional analysis. I asked about what FB wanted from network APIs and capabilities - hoping for a QoS reference - and got a blank look. It's not even on the radar screen.

    At another event, GE's Minds and Machines, the "edge" nodes described run a cut-down version of GE's Predix software which can work without the cloud-based mothership when offline - essential when you consider the node could be on a locomotive in a desert, or on a plane at 35,000ft.



    The simple truth is that there is no "end to end QoS" for Internet applications. Nobody controls every step from a user's retina to a server, for generic "permissionless innovation" applications and services. Paid prioritisation is a nonsense concept - the Net Neutrality crowd should stop using that strawman.

    Yes, there's a need for better QoS (or delta-Q or performance management or slicing or whatever other term you want to use) in the middle of networks, and for very specific implementations like critical communications for public safety. 

    The big unknown is for specific, big, mostly mono-directional flows of "content", such as streaming video. There could be an argument for Netflix and YouTube and peers, given they already pay CDNs, although that's a flawed analogy on many levels. But I suspect there's a risk that any QoS payments to non-neutral networks get (more than?) offset by reverse payments by those networks to the video players. If telcos charge Netflix for QoS, it wouldn't surprise me to see Netflix charge telcos for access. It's unclear whether the net effect is zero-sum, positive or negative.

    But for the wider Public Internet, for consumer mobile apps or enterprise cloud? Guaranteed (or paid) QoS is a myth, and a damaging one. Yes, better-quality, better-managed networks are desirable. Yes, internal core-network use of better performance-management, slicing and other techniques will be important for telcos. Private wireless or fixed broadband networks, where the owner controls the apps and devices, might be an opportunity too.

    But the concept of general, per-app QoS-based Internet access remains a dud. Both network innovation and AI are taking it ever-further from reality. Some developers may work to mark certain packets to assist routing - but they won't be paying SPs for an abstract notion of "quality". The notion of an "application outcome" is itself a wide and moving target, which the network industry only sees through a very narrow lens.
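
    For what it's worth, that kind of packet marking is about the limit of what an application can do today without any commercial QoS relationship. A minimal sketch (Python on Linux; whether IP_TOS is exposed on a given platform, and whether any network actually honours the DSCP value rather than re-marking it, are both assumptions):

```python
import socket

# DSCP "Expedited Forwarding" (46), shifted into the high 6 bits of the TOS byte
DSCP_EF = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    # Mark outgoing packets; many networks will simply ignore or re-mark this
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)
except (AttributeError, OSError):
    pass  # platform doesn't expose IP_TOS, or it isn't permitted - carry on unmarked

sock.sendto(b"latency-sensitive payload", ("198.51.100.10", 5004))  # documentation address
```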


  • Thoughts on in-building wireless - and an upcoming client webinar

    I've been pondering some of the side-effects and necessary enablers of the accelerating wireless evolution path we're seeing. As well as spectrum issues I've covered a lot recently, deploying indoor infrastructure is going to be another one of them. 

    It is not a new assertion that indoor networks are important for enterprise. The frustrations of poor indoor cellular coverage are universal, while businesses of all types need to provide employees and guests with high-quality Wi-Fi.

    (I'll cover trends in home Wi-Fi in a later post, while I've already written about industrial facilities in a number of previous ones, such as here, as the issues are as much about spectrum as about infrastructure and planning.)

    Various solutions abound for providing good signal indoors - distributed antenna systems (DAS), small cells, or even just deployment of lower-frequency bands in outdoor networks, with better penetration through walls. Yet costs remain considerable, especially as usage increases near-exponentially. Upgrading or retro-fitting existing installations often requires hard economic decisions, given that most such investments are not directly "monetised". Suitable expertise, foresight, planning tools and ongoing monitoring/reporting are important.

    The future, however, will accelerate the role of in-building/on-site wireless connectivity - in both predictable and unpredictable ways. If we consider what a building might look like in the year 2030, say - and how it may be used and occupied - we can start to see the challenges and opportunities.

    As well as today's well-known and well-described uses of wireless (smartphones and laptops on Wi-Fi and cellular networks), we can expect to see a huge number of new uses emerge. This means that today's implementations will require future-proofing, to support the unknowns of tomorrow. For example, consider the implications of:

    • IoT deployments for smart buildings, such as a proliferation of sensors for heating, security, or the operation of elevators. These may require better coverage in unusual places - in ceiling voids, lift-shafts, basements and so on. Bandwidth and latency requirements will vary hugely, from life-critical but low-data fire/carbon monoxide sensors, to networked video cameras, or once-an-hour reporting from water tanks.
    • Moving devices such as robots or automated trolleys, delivering products around the building. While some will be fully-autonomous, others will need constant wireless connectivity and control.
    • 5G networks will be deployed from around 2020, with further evolutions in following years. These may be extremely demanding on in-building coverage solutions, especially as some networks are likely to use frequencies above 6GHz - perhaps even as high as 80GHz (see the rough path-loss comparison after this list). Extensive use of MIMO and beam-forming may also add complexity to indoor implementations. (A new variant of WiFi known as WiGig also uses 60GHz frequencies.)
    • Likely huge growth in narrowband wireless, connecting low-powered (but maybe very dense) networks of sensors or other endpoints. These may use 3GPP technologies such as NB-IoT, or other options such as LoRa and SigFox.
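
    Following up the 5G frequency point above: a minimal sketch (Python; free-space loss only, ignoring wall penetration, antenna gains and beam-forming, with distance and bands picked arbitrarily) of how raw path loss grows with frequency:

```python
import math

def free_space_path_loss_db(distance_m, frequency_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f / c)."""
    c = 3.0e8
    return 20 * math.log10(4 * math.pi * distance_m * frequency_hz / c)

# Compare a few indoor-relevant bands over the same 10 m path
for label, freq in [("2.6 GHz", 2.6e9), ("3.5 GHz", 3.5e9),
                    ("28 GHz", 28e9), ("60 GHz", 60e9)]:
    loss = free_space_path_loss_db(10, freq)
    print(f"{label}: {loss:.1f} dB over 10 m (free space only)")

# Higher frequencies lose tens of dB more before walls are even considered -
# one reason mmWave 5G is so demanding on in-building infrastructure.
```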

    All of these trends imply very different traffic patterns. It is not realistic just to extrapolate from current usage - robots may go to places in buildings where humans do not, for example. Mobility requirements may evolve - and so will regulations.

    It is not just new classes of device and application which will need to be supported by well-designed coverage infrastructure, but also new classes of service provider that need to access them.
    • The advent of new unlicensed or shared-spectrum models of frequency allocation (eg CBRS in the US, or MulteFire) may mean the arrival of new operator types - dedicated IoT solutions providers that "bring their own wireless"; enterprises acting as their own local on-site MNOs; various models of "neutral host" and so on.
    • Private enterprise cellular networks are starting to become more widespread. Some governments are allocating spectrum for industries like utilities or smart-cities, while equipment vendors are offering optimised enterprise-grade cellular infrastructure.
    • Potential future regulations for emergency-services wireless connections. Police and fire authorities are increasingly using broadband mobile, both for humans and remote-sensing devices.
    • Distributed-mesh service providers, that operate as decentralised networks with micropayments, or as community initiatives. Some may use blockchain-type arrangements for shared-ownership or membership fees.
    One of the unknowns is about the convergence (or divergence) of different network types. On one hand, cellular networks are embracing Wi-Fi for offload, or for multi-network aggregation, especially as they worry that the return of flat-rate data plans may stress their networks. On the other, some networks are looking at running 4G/5G in unlicensed spectrum instead of (or in addition to) Wi-Fi. Yet more service providers are adopting a "Wi-Fi first" approach, reverting to MVNO models for cellular where needed. Future permutations will likely be more complex still. All will (ideally) need to be well-supported by indoor wireless infrastructure.

    For property developers and owners, the quality of indoor networks is increasingly key in determining valuations and rental occupancy. Already seen in hotels and new office builds, this means it will be important for today's new constructions and refurbishments to support adequate flexibility and headroom for the next decade or more.

    This takes on further emphasis if you consider the trend towards "buildings-as-a-service", exemplified by organisations such as WeWork. These new classes of facility often incorporate wireless connectivity both as a billable service element and as a way for their owners to manage the properties effectively, in terms of energy-efficiency and security. Other forms of monetisation and data-analytics around wireless location-sensing/tracking are also becoming more important.

    Lastly, in-building challenges will be driven by the specific location and industry, which themselves may change in nature over the next decade. New building materials, construction practices and regulations will impact wireless in unpredictable ways - more metallic insulation perhaps, but also perhaps robotic or pre-fabricated construction allowing wireless systems to be installed more easily. Individual industry verticals will have their own shifts - what will retail stores look like, and how will customers behave, in the era of home deliveries by drone, but more on-premise "experiences", perhaps with AR/VR systems? What will workplaces of the future look like, in an era of self-driving vehicles? Industrial facilities will become increasingly automated, with the largest uses of wireless connections being machines rather than humans. Hotels and airports will see shifts in data connectivity needs from employees and visitors, as application usage shifts.

    Small cells look certain to play a more important role in future, and Wi-Fi is going to remain the most important indoor technology for many users and businesses (ignore the fantasists who think it's at risk from 4G / 5G - see my earlier post here).

    There are no easy answers here - even if you construct good scenarios for the future, undoubtedly we will be surprised by events. But some form of upfront discipline in designing and building indoor wireless solutions is ever more critical, given the unknowns. The more future-proofing is possible, the lower the potential risk of being caught out.
      
    On October 5th, at 3pm BST / 4pm CET / 10am EDT, I will be presenting on some of these topics on a webinar for client iBwave. A link to the event is here.


