And the holy spirit, NVMe-oF?

Comment The amount of data being collected and held in systems is – yes, we know – increasing, as organisations generate and store data for real-time or post-real-time analysis.

One of the drivers behind this is digital transformation. When dealing with their banks, supermarkets, and airlines, people want to experience the same slick and quick interactions they enjoy with Amazon, Netflix, and Uber. If your IT system is stuck in the hard-drive era and unable to chew through data at the same rate as that web-tier trio, then, with apologies to Jack Swigert and Jim Lovell, Houston – you may have a problem.

These internet giants obviously run more virtual machines and access data faster than smaller enterprises saddled with legacy IT. That means, according to Forrester vice president and principal analyst Brian Hopkins, those using legacy tech risk becoming prey for faster-moving rivals.

So how do you stop getting eaten? One way is to transform your data center’s infrastructure, and an increasingly popular means of doing so is by using NVMe flash drives and the NVMe over Fabrics (NVMe-oF) protocol.

Enter Non Volatile Memory Express (NVMe)

According to IDC’s storage research vice president Eric Burgener, it won’t be long before NVMe-based all-flash boxes take over and cannibalize the SAS-based all-flash array market. The reason? Growth in workloads that demand the performance offered by NVMe – think customer interactions, applications such as AI inference and advanced analytics, and devops where teams depend on fast iterations. Each of these demands high bandwidth and low latency.

As for NVMe-oF, Burgener reckons a transition will occur during the next three to four years. NVMe-oF takes the performance and latency gains provided by NVMe and rolls them out over network fabrics such as Ethernet, Fibre Channel, and InfiniBand.

With NVMe-oF, you can reduce latency and increase throughput all the way from the software stack through to the storage array via the data fabric. It “makes sense for enterprises to understand what this technology can do for them so that they can integrate it into their own environments most cost-effectively,” according to Burgener.

Secret sauce

The big boost from the NVMe-oF protocol is that it delivers more IO operations in a shorter time, so you can run more applications on bare metal or on virtual machines. Translated to a business perspective, that means more applications and faster services on the same or – hopefully – a reduced server-storage footprint. Of course, external storage can be connected by the same Fibre Channel or Ethernet cables as before, though upgrading the switching to take advantage of NVMe-oF will enable the delivery of vastly more IO operations to your servers because everything is running more efficiently.

If you’re keeping an eye on costs, you should see a fiscal return: increased capabilities with, at minimum, no expansion of the server-storage estate, and – ideally – consolidation, meaning reduced hardware costs and software licensing, along with reductions in the associated costs of space, cooling, and power.

Operating-system blocker

At the root of this is the way server operating systems and storage networking technologies have processed storage requests. Historically, this has been slow, and perhaps hobbled your application servers’ performance. That’s important to remember, because NAND flash drives let you feed data faster to a server’s processors and system RAM, with data access latency many times lower than disk: a SAS-interface SSD can take, for example, 30,000 nanoseconds compared to the roughly one-million nanoseconds for a disk data access – so around 33 times faster.

However, the virtualized, multi-socket, multi-core servers in a modern data center demand more – much more: more data pumped into memory, faster. Rather than SAS drives, which have a single command queue and reach the PCIe bus through a SAS-to-PCIe adapter, NVMe SSDs link directly to the PCIe bus. NVMe is also multi-queued, able to support 64K separate queues, each holding up to 64K requests.

Why does that matter? A bigger queuing system means computers can conduct more transactions simultaneously. We’re back to maximizing bang for buck: you have the same, or fewer servers, running a greater number of applications or serving more sessions to more users.

And so it is with NVMe. Each drive has an access latency around 150μs, and a single drive can deal with multiple accesses at once. A drive tray with, say, 24 SSDs can deal with many, many more. An NVMe-oF flash array could support many more IO operations per second than a disk array with the same number of drives. It is staggeringly more efficient at storing and delivering data than spinning disk arrays.
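
To put rough numbers on that, here is a back-of-the-envelope sketch (not a benchmark): it applies Little’s law – throughput is roughly requests in flight divided by latency – to the ~150μs figure above, with a per-drive queue depth of 32 assumed purely for illustration.

```python
# Back-of-the-envelope sketch, not a benchmark: Little's law says
# throughput ~= requests in flight / latency per request.

ACCESS_LATENCY_S = 150e-6   # ~150 microseconds per access, the figure quoted above
QUEUE_DEPTH = 32            # assumed requests in flight per drive (illustrative only)
DRIVES_PER_TRAY = 24        # the 24-SSD tray mentioned above

iops_per_drive = QUEUE_DEPTH / ACCESS_LATENCY_S
iops_per_tray = iops_per_drive * DRIVES_PER_TRAY

print(f"Per drive: {iops_per_drive:,.0f} IOPS")   # roughly 213,000
print(f"Per tray:  {iops_per_tray:,.0f} IOPS")    # roughly 5.1 million
```

The exact figures will vary by drive and workload; the point is that the drives themselves stop being the bottleneck.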

NVMe-oF

So, you have NVMe drives housed in an external storage array or storage server, and these systems can share capacity and responsiveness with servers. But your arrays, at least where block data access is concerned, are typically connected using Fibre Channel or iSCSI over Ethernet, which are much slower than the PCIe bus – and that’s a problem.

That’s because you have a complex IO stack: an IO request from an application passes into the host operating system’s storage stack, then through the Fibre Channel or iSCSI drivers, the network link, the array drive controller, the internal array network, and finally to the actual drives.

That takes time. Now, though, at least some of these steps can be avoided: NVMe-oF extends the NVMe protocol across the storage network fabric, tightly coupling solid-state storage to a server or array controller’s processor and RAM via the PCIe bus. This is important because, for data-intensive applications, you are stripping out the latency inherent in that multi-step process, thereby serving applications faster.

Engineers found a way to do this by using remote direct memory access (RDMA) technology: it can connect servers and storage using the NVMe-oF protocol across a storage network. By using RDMA and NVMe-oF, storage requests no longer need to pass through the host operating system’s traditional storage IO stack or other controller hardware to reach a drive over the network. The speed is incredible, adding less than 10 microseconds of latency compared with a local directly-connected NVMe SSD.

What this means for you is that a 24-drive flash array connected to a set of application servers using NVMe-oF can satisfy many more IO requests than a SAS-connected array of 24 flash drives – and many, many more than a SAS-connected array of 24 hard disk drives.
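
As a rough illustration of that gap, the sketch below plugs the latency figures quoted in this piece, plus the sub-10μs fabric overhead, into the same arithmetic. The requests-in-flight values are assumptions made for the sake of the comparison, not vendor specifications.

```python
# Illustrative comparison only: IO requests an array of 24 drives might service
# per second, plugging in the latency figures quoted in this article. The
# requests-in-flight values are assumptions; real arrays are also bounded by
# controllers, software, and the network itself.

DRIVES = 24

configs = {
    # name:                 (latency per access in seconds, assumed requests in flight per drive)
    "SAS hard disk array":  (1_000e-6, 1),        # ~1 ms per access
    "SAS flash array":      (30e-6, 1),           # ~30 us per access, single command queue
    "NVMe-oF flash array":  (150e-6 + 10e-6, 32), # drive latency plus <10 us fabric overhead
}

for name, (latency_s, in_flight) in configs.items():
    iops = DRIVES * in_flight / latency_s
    print(f"{name:<22} ~{iops:,.0f} IO requests per second")
```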

An NVMe-oF storage system therefore means greater virtual-machine density in servers because it removes the data-delivery bottlenecks that would hold back the server’s processing cores. We’re back to talking about improved response times and the ability to run more applications: that means customer-facing and data-intensive applications able to scale and meet demand.

The future

What does this mean for your digital or big-data future? NVMe and NVMe-oF, at a fundamental level, mean a future-proofed storage layer, which will be the bedrock of your digitized business.

The combination of performance, capacity, and availability should mean faster throughput and lower latency for a new generation of applications. They mean, too, not just raw performance and reduced latency, but also greater flexibility. Let’s say you begin pooling storage virtually: with NVMe and NVMe-oF, you get the throughput to make all those SAN and NAS systems appear, act, and serve as one single system. That is of particular benefit if you’re considering a move to a hyperconverged and software-defined infrastructure: Gartner reckons a fifth of shared accelerated storage products will be based on NVMe by 2021.

There are other benefits, too: a consolidated SAN or NAS estate working harder, which will translate into reduced software licensing costs and savings in hardware, power, space, and cooling.

Together, NVMe and NVMe-oF promise to disrupt the data center. With more products expected from vendors as the market grows, IT leaders should now start planning the workloads to move and how to architect for a smaller, higher-capacity, and lower-cost data center. ®

Dodgy dealing in a mobile phone shop? What kind of a world?

The City of New York is suing T-Mobile US, alleging that it is unfairly exploiting people with its “Metro by T-Mobile” brand – its no-contract, pre-pay service.

Aimed at less wealthy people who are still in need of mobile phones, Metro promised they would no longer have to rely on “subpar devices, service and coverage”.

But according to the legal complaint (PDF): “Unfortunately for Metro consumers, T-Mobile’s management of Metro ensured that ‘subpar’ was exactly what consumers received – only now with the veneer of name-brand dependability.”

The city alleges Metro engaged in abusive sales tactics, including selling second-hand phones as brand-new devices; signing people up – in partnership with third parties also named in the complaint – for expensive finance arrangements, including pricey leasing agreements, without their consent; and advertising misleading guarantee statements on its website.

The case accuses 56 stores in all five New York boroughs of illegal activity and holds both T-Mobile US and MetroPCS responsible for staff actions. The complaint alleges that consumers’ losses were substantial – it claims to have identified 21 second-hand iPhones which were sold for several hundred dollars each.

New York also claimed T-Mobe charged taxes that appeared to be “made up”, with colourful titles such as “device change taxes” and “device activation taxes”. These charges were “in addition to an activation fee of $15”, and accompanied a “failure to provide legal receipts,” the complaint states.

The case has been brought by NYC’s Department of Consumer Affairs and names T-Mobile US, MetroPCS New York and other defendants.

NYC is demanding that T-Mob hand over all its gains from the sales and establish a fund to repay New Yorkers if the court finds in the city’s favour, and wants civil fines imposed for all violations.

It has asked for a jury trial.

The complaint could have come at a better time for T-Mobile US, which is still trying to persuade New York, California and eight other states – in federal court – that they should approve the cellular telco’s proposed takeover of Sprint.

New York’s Attorney General Letitia James gave good quote on the deal: “This is exactly the sort of consumer-harming, job-killing mega-merger our antitrust laws were designed to prevent.”

Federal Communications Commission chief Ajit Pai has already agreed the $26bn merger should go ahead saying he believed the companies’ promise to rapidly set up 5G networks and improve rural access. ®

Atlassian would love it if you joined its glorious cloudy future

Recent entrant to the billion-dollar collaboration club Atlassian is to unleash a free version of the Jira issue-tracking system amid a shake-up of its cloud pricing plans.

After price hikes left customers a little lighter in the pocket, the introduction of a free tier for Jira Software, Confluence, Jira Service Desk and Jira Core will be of interest to those keen to try but wary of the costs involved and the tedious trial periods of yesteryear.

Free options for Trello, Bitbucket and Opsgenie are already available, although they are, of course, a tad limited. Bitbucket, for example, is free for teams of up to five users, but if you want more developers, build minutes or storage then you’ll need to pony up $2 per user per month for the standard edition or $5 for the premium version.

Harsh Jawharkar, head of GTM for Enterprise Cloud at Atlassian, told The Register the move was “focused on choice”, adding that the company was keen to “broaden our reach” to startups and emerging markets that would otherwise be put off by the cost.

That free tier would be arriving “roughly within the next month”, according to Jawharkar.

Not that such altruism extends to the feature-set. As with many other free product tiers, there are limitations. Jawharkar told us there would be user limits, and indeed only 10 users (or three agents) can play, and file storage is limited to 2GB.

And support? Community only.

You can also forget all about audit logs for the freebie tier for Jira Software, Core, Service Desk and Confluence. Users will have to consider actually paying some money.

And, of course, Cloud Premium is where Atlassian would like those users to go as it heads to a glorious subscription future and away from the dark days of perpetual licensing.

CEO Scott Farquhar observed that “more than 90 per cent of our new customers start with one of our cloud products”, just in case there were any lingering doubts as to the direction of travel.

To sweeten the pill, Premium Jira Software users get unlimited storage and 24/7 support for $14 per user per month.

Jira Service Desk will also be joining the Premium gang, which has a 99.9 per cent uptime Service Level Agreement.

We asked Jawharkar what that SLA would mean in practice – after all, a refund of a bit of an invoice is often little compensation compared to the cost to businesses when the cloud falls from the sky – and were told that “the industry standard approach” was being adopted.

That, according to Jawharkar, will “rely on service credits, which would then be used to offset whatever the customer is already paying”.

Atlassian insisted the approach was based on customer feedback.

Bitbucket, for example, had a bit of a chequered 2018, falling over in October and January.

Choose your own location

While the move from a seven-day trial period to a free tier is eye-catching, more significant for businesses considering Atlassian’s cloud is the arrival of some much needed privacy and security upgrades.

Most important will be control over data residency. Being able to select a physical location for that precious data is critical for companies with a regulatory or compliance need.

Atlassian boasted that customers would “soon” be able to pick a preferred location from anywhere in the company’s global footprint during onboarding, although existing users could be in for a faff.

Jawharkar told us the plan was to allow customers to select North America or Europe for their data, hopefully by the end of 2019. “We’ll start with a sort of continental regional level,” he said, “then eventually, what we’ll do over time is figure out how to offer a more granular approach to data management.”

Since the company runs on AWS, that eventual list will likely bear a distinct resemblance to that of Amazon.

Also set to tickle those enterprise users is the move out of private preview for Google Cloud Identity and Microsoft Active Directory Federation Services for login integration and URL customisation.

The latter, which won’t be available until 2020, will start with subdomains for Jira (Confluence will follow later) and Jawharkar told us the plan was to eventually allow customers to map to their own domain.

Finally, as if to underline how keen the company is to persuade customers that they will live their best lives in the cloud, trial windows for existing customers will be axed.

“We want to put our money where our mouth is,” said Jawharkar. “This licence will give our on-prem customers the ability to try cloud on us for free.”

And ideally stop those customers from looking too hard at alternatives when planning that eventual migration. ®

Canada and EMEA see growth. As for the rest of the world…

“The bigger they are, the harder they fall” seems an appropriate phrase to describe the predicament facing server vendors. Following a record 2018, almost all of the major manufacturers recorded slimmer numbers in Q2.

IDC stats show market revenue declined globally by 11.6 per cent year-on-year to $20bn, and shipments were down 9.3 per cent to 2.961 million units.

“The second quarter saw the server market’s first contraction in nine quarters, albeit against a very difficult compare from one year ago when the server market realised unprecedented growth,” said IDC research manager Sebastian Lagana.

“Irrespective of the difficult compare, factors impacting the market include a slowdown in purchasing from cloud providers and hyperscale customers, an off-cycle in the cyclical non-x86 market, as well as a slowdown from enterprises due to capacity slack and macroeconomic uncertainty,” he added.

The world’s cloud slingers hit pause on spending earlier this year, something noted by all the major enterprise vendors, and the cycle that carried IBM on a refresh wave with its Power range has run its course. There is also talk of trade tensions between China and the US denting confidence, as well as the slowdown in certain economies including the Middle Kingdom.

In revenue terms, Dell was the world’s largest server maker, selling $3.809bn worth of kit, down 13 per cent on the year-ago period. HP came in next with $3.607bn, down 3.6 per cent.

ODM Inspur, in third place, was the only major vendor to report growth, up 32.3 per cent to $1.438bn. It was also the only major top-five vendor to see shipments rise.

Lenovo was relegated to fourth spot as revenue shrank 21.8 per cent to $1.212bn, IBM dropped 27.4 per cent to $1.188bn, and even the ODM Direct category – made up of white-box builders based in the Far East – was down, falling 22.9 per cent to $4.232bn. The rest of the market fell 4.8 per cent to $4.536bn.

Canada was the fastest growing market, with 13.4 per cent revenue growth, followed by EMEA, up 2 per cent on aggregate. Japan was down 6.7 per cent, the wider Asia Pacific region was down 8.1 per cent, the US collapsed 19.1 per cent, Latin America was down 34.2 per cent, and China was down 8.7 per cent.

x86 revenues dropped 10.6 per cent to $18.4bn and non-x86 server revenues dropped 21.5 per cent to $1.6bn.

Ouch. Ouch. Ouch. ®

Pro-democracy protesters in Hong Kong have been turning to a new app to communicate – one that does not use the internet and is therefore harder for the Chinese authorities to trace.

Bridgefy is based on Bluetooth and allows protesters to communicate with each other without an internet connection.

Downloads are up almost 4,000% in the past two months, according to measurement firm Apptopia.

Texts, email and messaging app WeChat are all monitored by the Chinese state.

Bridgefy uses a mesh network, which links users’ devices together so that people can chat with others even if they are in a different part of the city: messages hop across other users’ phones until they reach the intended person.

The phone-to-phone range is about 100m (330ft).
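
As a rough sketch of the idea – a generic toy model, not Bridgefy’s actual protocol or code – a message can still reach someone several hundred metres away by hopping between phones that are each within Bluetooth range of one another:

```python
from collections import deque

# Toy model of mesh relaying (illustrative only, not Bridgefy's implementation):
# phones within ~100m of each other can exchange messages directly; anything
# further away is reached by hopping across intermediate phones.

RANGE_M = 100.0

def in_range(a, b):
    """True if two phones (x, y positions in metres) are within Bluetooth range."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 <= RANGE_M

def hops_to_deliver(phones, sender, recipient):
    """Breadth-first flood from sender; returns hop count to recipient, or None."""
    queue, seen = deque([(sender, 0)]), {sender}
    while queue:
        current, hops = queue.popleft()
        if current == recipient:
            return hops
        for other in phones:
            if other not in seen and in_range(phones[current], phones[other]):
                seen.add(other)
                queue.append((other, hops + 1))
    return None  # no chain of phones links sender to recipient

# Four phones strung across ~270m of a city street: A reaches D in three hops.
phones = {"A": (0, 0), "B": (90, 0), "C": (180, 0), "D": (270, 0)}
print(hops_to_deliver(phones, "A", "D"))  # 3
```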

The app was designed by a start-up based in San Francisco and has previously been used in places where wi-fi or traditional networks struggle to work, such as large music or sporting events.

Speaking to Forbes, co-founder Jorge Rios said of the spike in use in Hong Kong: “People are using it to organise themselves and to stay safe, without having to depend on an internet connection.”

The BBC understands that protesters are turning to Bridgefy in case the internet is cut off, or the so-called Great Firewall of China, which censors some parts of the web on the mainland, is extended to Hong Kong.

A similar app, FireChat, has already been used in previous protests in Hong Kong and also in Taiwan, Iran and Iraq.

Tear gas

Prof Alan Woodward, a computer security expert based at Surrey University, is not convinced such apps are really hidden from the authorities.

“With any peer-to-peer network, if you have the know-how, you can sit at central points of it and monitor which device is talking to which device and this metadata can tell you who is involved in chats.

“And, of course, anyone can join the mesh and it uses Bluetooth, which is not the most secure protocol. The authorities might not be able to listen in quite so easily but I suspect that they will have the means of doing it.”

The protest movement in Hong Kong grew out of marches against a controversial bill to allow criminal suspects to be sent to mainland China for trial. That has since been suspended, but the marches have continued and morphed into a broader pro-democracy movement.

Protesters have shown tech-savvy skills before, with pictures circulating on Twitter earlier this summer showing some defusing cartridges of tear gas with water bottles.

US Space Command launches probe – wait, is that the sound of a black helicopter?

A loud boom heard over the US state of New York on Labor Day could have been the result of a fireball arriving from space… or a military jet thundering through the skies… or something else, according to the American Meteor Society.

Folks enjoying their Monday off were interrupted that late afternoon by what sounded like an explosion rippling through the center of the Empire State. A few people even called 911 to report the noise. Some feared it was a meteor detonating in Earth’s atmosphere, creating a fireball and subsequent blast.

Mike Hankey, operations manager at the non-profit society set up by amateur and professional astronomers, told The Register on Wednesday that the source of the mystery rumbling has yet to be confirmed. It was earlier reported by the media that it was probably a lost space rock winging its way into our skies.

“We thought it might be a fireball, but we don’t know,” he told us. “All the evidence pointed to a fireball, a natural bright meteor that can explode in the atmosphere and create a sonic boom. But now there is some evidence that it could be from a military jet.”

The meteor society has 19 reports of a fireball at 2109 UTC (1709 EDT), with sightings in New York, Pennsylvania, and even some parts of Canada.

“There are a few things about it that are weird,” Hankey continued. “More people seemed to hear the noise rather than see a light flash. Most of them who heard it were in the Syracuse and Rochester area. Also sensors detected that the noise was 20 kilometres from the ground – that’s too low for most fireballs.

“So, we don’t really know. I contacted the Air Force Space Command and they said they were investigating the matter, too.”

Hankey forwarded us an email he had sent the US military letting it know of the puzzling big bang. According to the ops manager, an official replied: “We’re checking on it with our folks in California. Thanks for letting us know. We’re getting media queries as well.”

We’ve asked Air Force Space Command for comment: we’ll let you know if they radio in with an explanation, or if the Men in Black suddenly show up at the office with those memory-wiping pens. ®

One moron down, two to go

The script kiddie at the center of the Satori botnet case has pleaded guilty.

Kenneth Schuchman, 21, of Vancouver in Washington state, this week admitted [PDF] to aiding and abetting computer hacking in an Alaskan federal district court. In exchange for only having to confess to a single criminal count, and increasing his chances of a reduced sentence, Schuchman admitted he ran the destructive Satori Internet-of-Things botnets.

From July 2017 to late 2018, Schuchman, along with co-conspirators referred to by prosecutors as “Vamp” and “Drake,” built and maintained networks of hijacked devices: these internet-connected gadgets would be infected and controlled by the gang’s Satori malware, which was derived from the leaked Mirai source code. Schuchman, who is said to have gone by the handle “Nexus-Zeta,” admitted to taking the lead in acquiring exploits to commandeer vulnerable machines and add them to the botnets, while “Drake” apparently wrote the code for the malware, and “Vamp” handled the money.

The money, you ask? Yes, the crew would launch distributed denial-of-service (DDoS) attacks from their armies of malware-infected gear for cash: you could hire them to smash your rivals and other victims offline by overwhelming systems with internet traffic from the Satori-controlled botnets.

“All three individuals and other currently uncharged co-conspirators took an active role in aiding and abetting the criminal development and deployment of DDoS botnets during this period for the purpose of hijacking victim devices and targeting victims with DDoS attacks,” Schuchman’s plea deal paperwork reads.

The Satori malware preyed on a number of poorly secured IoT devices, including home digital video recorders (DVRs), surveillance cameras, and enterprise networking gear. The slaved units, once infected by Satori, mainly via weak passwords and known vulnerabilities in device firmware, were then put to use as DDoS cannons-for-hire.

In March 2018, the gang, according to Schuchman, had rechristened the Satori botnet as Tsunami or Fbot, and continued to infect thousands of devices – including 32,000 belonging to a Canadian ISP, and 35,000 HiSilicon DVRs – and potentially as many as 700,000 in total.

By then, the botnet was primarily being used to cripple the servers of various online games, as well as attacking gaming server provider Nuclear Fallout. Schuchman would at times brag his army of bots could blast out at least 100Gbps, and at one point even 1Tbps, of junk network traffic.

Though he was indicted in August 2018, US prosecutors say Schuchman not only continued his illegal activities, but became even more active and aggressive. Later that year, Schuchman had a brief falling out with his co-conspirator “Drake” and would eventually call a police SWAT team on his former buddy – a move that resulted in a “substantial law enforcement response” showing up at the ex-pal’s home.

“At all relevant times, Schuchman knew and understood that these botnets were designed to be used, and were in fact being used, to commit illegal and unauthorized DDoS attacks against computers in the United States and elsewhere,” prosecutors said.

“Schuchman acted with the intent and goal of aiding, abetting, and furthering these illegal DDoS attacks and causing them to occur.”

Though the plea deal paints Schuchman as playing a key technical role in the gang, reports from around the time of his arrest mid-2018 tell a different story. In those accounts, Schuchman is presented as a hacking novice who was in over his head with the Satori botnet.

Infosec bods working on the case point to a number of posts Schuchman made under his Nexus-Zeta handle asking basic questions about setting up exploits and maintaining botnets.

Prosecutors may have agreed with that assessment, as the plea deal allows Schuchman to avoid a Computer Fraud and Abuse Act charge, and does not include any charges for the swatting attack.

He is due to be sentenced on November 21. ®

Ad giant gets slap on the wrist, promises not to do it again

Google, fighting a desperate battle to provide privacy that’s not so private it blinds targeted advertising, has agreed to provide actual privacy, but only to those watching videos aimed at children.

On Wednesday, the US Federal Trade Commission said Google and its YouTube subsidiary will pay $170m to settle charges brought by the FTC and the New York Attorney General that the online video service gathered personal information from children without parental consent.

That’s a record for penalties under the Children’s Online Privacy Protection Act (COPPA) Rule and a rounding error for Google, which earned $30.74bn in pure profit last year.

COPPA gives children something that adults don’t have in the US – privacy protection strong enough to prevent ad-oriented tracking. As Google has – at long last – discovered, using cookie files to track viewers younger than 13 and deliver targeted ads without parental permission may lead to a modest fine and a public scolding by regulators.

“YouTube touted its popularity with children to prospective corporate clients,” said FTC Chairman Joe Simons in a statement. “Yet when it came to complying with COPPA, the company refused to acknowledge that portions of its platform were clearly directed to kids.”

Not only did Google refuse to acknowledge the obvious fact that certain YouTube videos were aimed at kids, the web giant went so far as to deny its video site’s audience included any children.

The complaint [PDF] against the two organizations explains, “[I]n response to one advertising company’s questions regarding advertising on YouTube as it relates to a toy company and COPPA, Defendant Google’s employee responded, ‘we don’t have users that are below 13 on YouTube and platform/site is general audience, so there is no channel/content that is child-directed and no COPPA compliance is needed.'”

As part of the settlement agreement, Google and YouTube have agreed to create and maintain a system that allows YouTube channel owners to declare content intended for children, so the two companies and their content providers can comply with COPPA. The two companies will also be providing COPPA training for employees who interact with channel owners.

More significantly, Google and YouTube have agreed to respect the privacy of anyone watching videos intended for children.

“Starting in about four months, we will treat data from anyone watching children’s content on YouTube as coming from a child, regardless of the age of the user,” said YouTube CEO Susan Wojcicki in a blog post.

“This means that we will limit data collection and use on videos made for kids only to what is needed to support the operation of the service. We will also stop serving personalized ads on this content entirely, and some features will no longer be available on this type of content, like comments and notifications.”

Given that YouTube’s comment section has a reputation as an infamous cesspool, that’s a bargain price for privacy protection if you’re in the mood for kid vids. ®

Our ads? Stomping on people’s privacy? Never! Not us! sobs search giant

Brave, the maker of a Chromium-based browser with a focus on privacy, claims advertising giant Google flouts Europe’s data protection rules by effectively leaking netizens’ web browsing activities to advertisers.

In an essay published on Wednesday, Brave’s chief policy officer Johnny Ryan said Google’s Authorized Buyers real-time bidding (RTB) system – which is used by millions of websites to serve ads to visitors – “broadcasts personal data” about those visitors to thousands of ad-industry companies all day, every day.

Said data can be used to track netizens as they surf across the web, from site to site, in violation of the EU General Data Protection Regulation (GDPR), Ryan claimed.

Google states that when it shares marketing data it does so “without identifying you personally to advertisers or other third parties.” Non-personal data shared in an RTB broadcast may include data about income, age and gender, habits, social media influence, ethnicity, sexual orientation, religion or political affiliation. That’s how interest-based adverts are targeted at folks: when you land on a webpage that uses Google’s RTB, a package vaguely describing you is emitted to advertisers, whose automated systems bid slivers of money in real time to show you an ad that is, hopefully, relevant to your life.
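
As a toy model of that flow – a generic illustration, not Google’s actual Authorized Buyers protocol, bid format, or prices – an exchange broadcasts a coarse description of the visitor, each bidder prices the impression, and the highest bid wins:

```python
# Toy model of a real-time-bidding auction (generic illustration, not Google's
# actual Authorized Buyers protocol): the exchange sends a coarse description
# of the visitor to many bidders, each returns a price, and the highest wins.

bid_request = {
    "page": "example-news-site.com/article",
    "segments": ["finance", "frequent-traveller", "30-40"],  # coarse interest data
}

def run_auction(request, bidders):
    """Each bidder's strategy maps the bid request to a price in dollars."""
    bids = {name: strategy(request) for name, strategy in bidders.items()}
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

bidders = {
    "travel_dsp":  lambda req: 0.0021 if "frequent-traveller" in req["segments"] else 0.0002,
    "finance_dsp": lambda req: 0.0015 if "finance" in req["segments"] else 0.0001,
}

print(run_auction(bid_request, bidders))  # ('travel_dsp', 0.0021)
```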

Google insists that partners abide by its policies, which ban the identification and profiling of internet users using this shared information.

But Ryan suggests self-regulation is insufficient. He notes Google’s Authorized Buyers system, active on some 8.4m websites, appends a string of characters to Push Page URLs that third parties can use as an identifier. The string does not provide actual personal information like a name or address; rather it’s a unique pseudonymous marker that, when combined with other Google cookies, can be used for tracking user activities across websites.

In the US, this isn’t illegal, but it is an alleged violation of Europe’s rules. Ryan provided this latest finding to supplement evidence submitted in a September 2018 complaint to the Irish Data Protection Commission (DPC). In May this year, the DPC opened an investigation into Google’s GDPR compliance.

The mechanism by which Google is said to pass identifiers to partners, Ryan claims, is known as a hidden Push Page, which loads without being seen by the website visitors and initiates network requests to various programmatic ad services. Push Pages get served from a Google domain as HTML files named “cookie_push.html.”

“Each Push Page is made distinctive by a code of almost two thousand characters, which Google adds at the end to uniquely identify the person that Google is sharing information about,” Ryan explained in his post. “This, combined with other cookies supplied by Google, allows companies to pseudonymously identify the person in circumstances where this would not otherwise be possible.”

Companies invited to access a Push Page, Ryan says, all receive the same identifier for the person profiled, allowing them to cross-reference their internal profiles and trade them for a broad view of a user’s online activity.
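
To see why a shared identifier matters, here is a minimal, hypothetical sketch – the identifier, field names, and data are invented for illustration, and this is not Google’s mechanism – showing how two firms holding partial records keyed by the same pseudonymous ID can simply join them:

```python
# Generic sketch of the cross-referencing concern (not Google's actual data or
# mechanism): two companies hold separate partial records keyed by the same
# pseudonymous identifier, so joining on that ID yields a broader profile.

company_a = {
    "push_id_9f3c": {"sites_seen": ["news-site.example", "health-forum.example"]},
}
company_b = {
    "push_id_9f3c": {"inferred_interests": ["mortgages", "running"]},
}

def cross_reference(a, b):
    """Merge records that share the same identifier."""
    merged = {}
    for ident in a.keys() & b.keys():
        merged[ident] = {**a[ident], **b[ident]}
    return merged

print(cross_reference(company_a, company_b))
# {'push_id_9f3c': {'sites_seen': [...], 'inferred_interests': [...]}}
```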

Asked to comment, a Google spokesperson disputed Ryan’s characterization of Push Pages. “A cookie_push is not an ID and not an identifier,” a spokesperson said in an email to The Register. “It is a parameter for measuring end-to-end latency.”

“We do not serve personalized ads or send bid requests to bidders without user consent,” Google’s spokesperson continued. “The Irish DPC – as Google’s lead DPA – and the UK ICO are already looking into real time bidding in order to assess its compliance with GDPR. We welcome that work and are co-operating in full.”

The DPC did not immediately respond to a request for comment.

According to The Washington Post, more than half the State Attorneys General in the US are expected to announce an antitrust investigation into Google’s business practices next week. ®

Phone nicked at airport, $15k in fun bux drained from wallet

A bloke was arrested and charged with identity theft after, it is claimed, he emailed an apology meant for his victim to a police detective.

Darren Carter, 29, of Blackwood, New Jersey, was charged with one count of first-degree identity theft last week, and is being held in a Connecticut jail where he faces trial at the Norwalk Superior Court.

The case dates back to April 17, when an unnamed fella, from Westport, Connecticut, reported that his phone had been pilfered while on a trip in California, and a crypto-coin wallet connected to the device had been drained of several thousand dollars in fun bux. The stolen dosh was transferred to a PayPal account, it is claimed.

“The victim reported that while traveling in California his cellular phone had been stolen in an airport,” said the Westport Police Department in a statement. “A few hours after the theft of his phone, he became aware that $15,472.31 had been transferred out of his Coinbase account; an application in which crypto-currency is managed. It was learned that funds from this account were converted into United States currency which was then moved into a PayPal account.”

The plod believe Carter was the person who stole the phone, accessed the wallet, and transferred the funds: for one thing, the suspected thief emailed a confession and apology meant for the victim to a police officer probing the four-month-long case, it is alleged. You may think Carter allegedly emailed the victim, who then passed on the sorry note to detectives, but no, according to the police, the suspected thief straight up emailed the plod. Perhaps, if these claims are true, he thought the cops would simply pass the message on to the victim?

“Among various financial transaction records allegedly connecting Carter to the crime, he additionally sent an apology email intended for the victim to the investigating detective,” Westport PD noted.

“In this message he not only confessed to taking the victim’s phone while he was also traveling in California, but additionally admitted to transferring the victim’s Coinbase funds into a personal account.”

The cops claim the tell-all email is not the only piece of evidence implicating Carter, as other transaction records also allegedly link him to the theft.

Either way, police say they were able to recover all of the stolen cash from PayPal and return it to the victim. Carter, who was being held at a New Jersey jail for an unrelated case, was extradited back to Connecticut for arraignment.

Carter remains behind bars as he was unable to meet the $150,000 bond. ®
