Software giant promises to extend protections across US

Microsoft has said that not only will it embrace a new data privacy law in California, due to come into force in the New Year, but it will also extend the same protections to everyone in the US.

In a blog post, the software giant’s chief privacy officer, Julie Brill, is enthusiastic about the new law, which has been the subject of extensive lobbying by tech giants like Google and Facebook seeking to water down its contents.

Microsoft, as with Apple, appears to view strong privacy as an opportunity to differentiate itself from its online competitors. “Our approach to privacy starts with the belief that privacy is a fundamental human right and includes our commitment to provide robust protection for every individual,” Brill wrote, adding: “We are strong supporters of California’s new law and the expansion of privacy protections in the United States that it represents.”

She also took several pot shots at Congress’ ongoing failure to agree on a federal data privacy policy, noting that “a lack of action by the United States Congress to pass comprehensive privacy legislation continues to be a serious issue for people who are concerned about how their data is collected, used and shared… In the absence of strong national legislation, California has enacted a landmark privacy law.” Brill is a former commissioner of the Federal Trade Commission (FTC).

That law – the California Consumer Privacy Act (CCPA) – “marks an important step toward providing people with more robust control over their data in the United States,” she wrote, adding that it “also shows that we can make progress to strengthen privacy protections in this country at the state level even when Congress can’t or won’t act.”

Data privacy for everyone!

As it did with Europe’s GDPR legislation, when it pledged to extend the same privacy rights to all its users across the globe, Microsoft will also extend the CCPA’s protections to all its US users, the post announced.

In many respects, the GDPR legislation is stronger than the CCPA, so there is little in real terms that Microsoft will have to undertake in adopting the CCPA wholesale. Microsoft also benefits from being considered a “service provider” under the CCPA, so much of the time it will not have to notify consumers before it sells personal information that it receives.

Regardless, Microsoft’s pro-privacy stance is notable and seemingly principled, and stands in stark contrast to the efforts of other tech companies that have done everything in their power to undermine personal privacy protections. When it became clear that those tech firms had been unable to exert sufficient influence in the California legislative process, they immediately switched to Washington DC in an effort to pass federal legislation that would override California’s laws.

Under the CCPA, companies have to be open and transparent about their data collection and their use of the material gathered. The rules also require them to allow Californian residents to see what data a company holds on them and to give them the option to stop it from being sold.

The law had an extraordinary genesis: three individuals concerned about the amount of data stored by tech companies developed a proposal to place on California’s ballot, a system that allows voters to make law directly without going through the normal lawmaking process.

It quickly became clear that widespread concern over what companies like Facebook have done and continue to do with personal data meant that the ballot was very likely to pass. That led to a mad scramble by lawmakers in Sacramento to pass a law before the ballot deadline.

Rush law

By going through traditional channels, the law became more flexible, and the ballot measure’s founders agreed to withdraw it if a California privacy law was passed in time. It duly was, in what may be the fastest ever bill approval, but the approach also gave lawmakers room to make changes to the law later.


Tech companies then flooded Sacramento with lobbyists in a persistent effort to undercut the CCPA but ultimately, with privacy advocates carefully watching events and politicians fearful of a public backlash, the law was passed pretty much intact.

Efforts to pass a federal privacy law – which have been a decade in the making (or, more accurately, stalling) – continue to make little progress. The most recent effort by two Silicon Valley lawmakers was released last week.

“We are optimistic that the California Consumer Privacy Act – and the commitment we are making to extend its core rights more broadly – will help serve as a catalyst for even more comprehensive privacy legislation in the US,” wrote Brill.

“As important a milestone as CCPA is, more remains to be done to provide the protection and transparency needed to give people confidence that businesses respect the privacy of their personal information and can be trusted to use it appropriately.” ®


Remember the UK DeepMind scandal? No? Just as well…

Google is at it again: storing and analyzing the health data of millions of patients without seeking their consent – and claiming it doesn’t need their consent either.

Following a controversial data-sharing project within the National Health Service (NHS) in the UK, the search engine giant has partnered with the second-largest health system in the United States, St Louis-based Ascension, to collect and analyze the health records of millions of patients.

According to a report in the Wall Street Journal, which claims to have seen confidential internal documents confirming the move, Google already has the personal health information of millions of Americans across 21 states in a database. The project is codenamed Project Nightingale and according to the WSJ, over 150 Google employees have access to the records of tens of millions of patients.

Neither patients nor doctors were told about the project, and neither gave consent for Google to be given access to the health data. But Google is relying on a legal justification that says hospitals (under the Health Insurance Portability and Accountability Act of 1996) are allowed to share data without telling patients if that data is used “only to help the covered entity carry out its health care functions.”

Google is using the data – which covers everything from lab results to doctor diagnoses to hospitalization records, and is linked to patient names and dates of birth – to develop new software that purports to use artificial intelligence and machine learning to provide valuable insights into health issues and even predict future health issues for individuals.

The whole approach may seem oddly familiar to Reg readers: we have extensively covered an almost identical scheme in the UK involving Google’s DeepMind, which was found to be storing and analyzing data on over a million patients following a data-sharing agreement with the Royal Free Hospital.

Not this again

Neither the hospital nor Google sought or received permission from doctors or patients for the use of that personal data, sparking an investigation from the Information Commissioner’s Office (ICO) that found a host of problems with the scheme.

The Royal Free NHS Foundation Trust had failed to comply with the UK’s Data Protection Act when it provided the 1.6 million patient details to Google’s DeepMind, the ICO concluded. It also found several shortcomings in how the data was handled, including that patients were not adequately informed that their data would be used as part of the test.

The hospital was told to establish a proper legal basis under the Data Protection Act for the project and for any future trials, and to outline how it would comply with its duty of confidence to patients in any future trial involving personal data. It was also told to complete a privacy impact assessment and commission an audit of the trial.

That subsequent audit itself proved controversial when it argued that the sharing of personal health data without consent had not broken any laws – despite the ICO and the UK Department of Health’s National Data Guardian concluding otherwise.

The report – commissioned by the trust – was limited in scope. It did not dig into Google’s initial data gathering, only into the current use of the “Streams” app that Google was developing. Most significantly, it concluded that the hospital had not breached its “duty of confidence” and justified that decision by claiming that the correct law to apply to the project was not data protection law but confidence law.

Under that law, the report argued, the data sharing was legally justified if its use did not “trouble a health professional’s conscience.” In other words, the legality of gathering and analyzing personal health data went from objective – you cannot do this without consent – to subjective – does this trouble my conscience?

Strained consciences

Cash-strapped hospitals’ consciences are likely to be more flexible when approached by a company that makes $138bn in annual income and is determined to use its systems to break into the health market, as indicated earlier this month by its $2.1bn acquisition of wearables company Fitbit.

The DeepMind audit also contained a number of other questionable assumptions. The auditors accepted Google’s argument that it needed to use very large databases of real patient data for safety reasons, although it didn’t dig into the basis for that claim. It didn’t dig into why Google needed to store that data either, or why Google needed to retain data indefinitely – in this case, going back eight years – as opposed to, say, a 12-month cut-off and deletion of old data.

Google did not even have a formal deletion policy; the auditors excused this by referring to clinicians who said keeping old data was useful for context. The audit also repeatedly made the argument that because the hospital’s systems had data going back many years, the data given to and stored by Google was mere duplication.

That argument was attacked by critics who pointed out that a hospital exists to provide care to patients and is paid to do that job, whereas Google’s entire business model is based on compiling data on people and then monetizing it by charging advertisers for access to people who may be interested in their products.

“It’s clinical care through a mass surveillance lens,” noted Eerke Boiten, professor of cybersecurity at De Montfort University. But the hospital’s auditors didn’t think that Google’s business model was relevant.

“In conducting our review, we considered if we ought to treat DeepMind differently from the Royal Free’s other information technology partners, such as Cerner,” the report said. “We decided that this would not be appropriate. DeepMind acts only as the Royal Free’s data processor… Given this limited mandate, we do not see why the Royal Free’s engagement with DeepMind should be any different from its use of other technology partners.”

Computer says whoah

There was also a technical assumption within the audit that raised eyebrows: it claimed that it was essential for Google to store all the data itself because the hospital’s IT systems wouldn’t be able to handle the load of Google’s database queries.

According to the audit, “the technical barriers to move to a query-based model are insurmountable” – but there does not seem to have been any inquiry into the actual systems in place at the hospital; the auditors simply took the claim at face value.
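For readers wondering what a query-based model actually means in practice: instead of handing the processor a full copy of the records, the hospital keeps the data and answers individual requests as they arise. Here is a purely illustrative sketch of the two approaches – the endpoints and functions are hypothetical and reflect neither the Royal Free’s nor Google’s actual systems:

```python
# Purely illustrative contrast between the bulk-copy model used in the
# DeepMind deal and a query-based alternative. All endpoints are hypothetical.
import requests

HOSPITAL_API = "https://records.example-hospital.nhs.uk"  # hypothetical URL

# Bulk-copy model: the processor receives and stores the entire dataset,
# then runs its queries against its own copy.
def bulk_copy_all_records() -> list:
    return requests.get(f"{HOSPITAL_API}/records/all").json()

# Query-based model: the hospital retains the data; the processor asks
# only for the records it needs, per patient, at the moment of care.
def query_one_patient(patient_id: str) -> dict:
    return requests.get(f"{HOSPITAL_API}/records/{patient_id}").json()
```

The audit’s claim was that the hospital’s systems could not handle the query load of the second approach; the point critics made is that nobody appears to have checked.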

Some other details about Google and DeepMind: Google initially said that DeepMind operated independently and so the data was never going to make its way to Google’s larger database. But after having been cleared through the audit, Google took over DeepMind entirely, subsuming it into its corporate umbrella – pulling it into its Google Health US arm, which is the same arm that has the data-sharing deal with Ascension exposed today.

Google also said, when it bought DeepMind, that it would set up an independent AI ethics board. But three years later, it still had not created one; it only did so when journalists pushed on the matter. And once the Royal Free scandal died down, Google disbanded it.

This time around, with Google entering the larger US market and with access to tens of millions of patients’ records, the tech giant has decided that rather than independent boards, it will hire staff and give them the same oversight role.

We’re hiring!

Last month, it hired Karen DeSalvo in the new role of chief health officer. DeSalvo was previously national coordinator for health IT under US president Barack Obama.


A few months earlier, Google hired former FDA commissioner Robert Califf to look after policy and health strategy. And both of them will report to former hospital executive David Feinberg.

In September, Google signed a 10-year deal with another US health provider, Mayo Clinic, to store its genetic, medical and financial records. That deal purposefully left the door open to Google developing its own software as a result of the data access but Mayo said any personally identifiable data would be removed before it was shared.

This latest project – Project Nightingale – does not appear to have the same privacy-protecting constraints.

In response to our questions, Google directed us to a press release put out today by Ascension. Nothing in the press release undercuts the WSJ report that 150 Google employees have access to the personal health records of tens of millions of Americans, nor does it address the issue of consent, or the claim that the data is not anonymized.

Instead, it refers to the project as a “collaboration” and says the deal will “modernize” its systems by “transitioning to the secure, reliable and intelligent Google Cloud Platform.” It also says that the collaboration will be “exploring artificial intelligence/machine learning applications that will have the potential to support improvements in clinical quality and effectiveness, patient safety, and advocacy on behalf of vulnerable populations, as well as increase consumer and provider satisfaction.” ®


Blame the algorithms – it’s the new ‘dog ate my homework’

Apple is being probed by New York’s State Department of Financial Services after angry customers accused the algorithms behind its new credit card, Apple Card, of being sexist against women.

The drama unfolded on Twitter over the weekend as David Hansson, creator of Ruby on Rails, the popular framework for the Ruby programming language, berated Apple for giving his wife, Jamie, a credit limit 20 times lower than his, despite the couple applying for Apple Card using information from joint tax returns.

After the couple complained, Jamie’s credit limit was boosted until it matched David’s. They were told by two Apple representatives that the issue was down to its algorithms and that Jamie should check her credit score. It turned out, however, that Jamie’s credit score was actually higher than her spouse’s.

The Apple Card was created and designed by Apple, but is issued by Goldman Sachs. A statement from the US bank’s spokesperson said each credit card application is evaluated independently.

“We look at an individual’s income and an individual’s creditworthiness, which includes factors like personal credit scores, how much debt you have, and how that debt has been managed. Based on these factors, it is possible for two family members to receive significantly different credit decisions.”

Although Jamie has a higher credit score than her husband David, it’s possible that the disparity of their Apple Card credit limits is due to a difference between their personal incomes.

“I have never had a single late payment. I do not have any debts. David and I share all financial accounts, and my very good credit score is higher than David’s,” Jamie said on Monday.

“I had a career and was successful prior to meeting David, and while I am now a mother of three children — a ‘homemaker’ is what I am forced to call myself on tax returns — I am still a millionaire who contributes greatly to my household and pays off credit in full each month. But AppleCard representatives did not want to hear any of this. I was given no explanation. No way to make my case.”

As a homemaker, maybe Jamie put down a lower figure for her personal income in the application compared to her husband. It’s unclear, however, as David did not respond to requests for comment.
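To make that reasoning concrete, here is a purely hypothetical sketch – not Goldman Sachs’ actual model, whose inputs and weights are not public – of how a limit formula keyed to individual rather than household income could give spouses with fully shared finances wildly different limits, even when the lower-limit applicant has the better credit score:

```python
# Purely hypothetical illustration: NOT Goldman Sachs' actual formula.
# Shows how a limit keyed to *individual* income can diverge for spouses
# who share all finances but report income differently on an application.
def credit_limit(individual_income: float, credit_score: int) -> float:
    # Toy formula: limit scales with reported income, nudged by credit score.
    return individual_income * 0.2 * (credit_score / 700)

# On paper the household income flows to one spouse; the homemaker reports
# little individual income despite having the higher credit score.
print(f"${credit_limit(1_000_000, 750):,.0f}")  # primary earner: ~$214,286
print(f"${credit_limit(50_000, 800):,.0f}")     # homemaker: ~$11,429
```

Under these made-up numbers the gap is roughly the twentyfold difference the Hanssons describe, without the formula ever looking at sex.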

Personal income and an official inquiry

The Hanssons aren’t the only ones who experienced this problem. Steve Wozniak, co-founder of Apple, also received a credit limit ten times higher than that of his wife, Janet.

When The Register asked Steve if there was a big difference between the figures he and his wife put down when they applied for the card, we were told that there was not.

“We have no separate assets. No separate property. No separate bank accounts. No separate credit cards. Any exceptions are accidental or things we are stuck with but they are small and meaningless. We have unlimited credit cards with other suppliers,” he said.

Steve also explained that the majority of the couple’s income came from his speaking engagements. “Our speaking agency, New Leaf Speakers, does a wire transfer of all funds directly into our joint account. It never even passes through my hands. When we married, Janet had as much as I in assets, and maybe more.”


The algorithms used to determine how much credit an individual receives are not Apple’s responsibility, he added. Instead, Steve said it was Goldman Sachs’ or Mastercard’s problem.

“They will not tell us how they came to different levels for myself and Janet,” he opined. “Obviously when they ask for things like bank accounts, they don’t have a human call to see the joint status.”

Sometimes being a loudmouth on Twitter gets you somewhere these days, and in Hansson’s case it caught the attention of the New York State Department of Financial Services (NYDFS). The state’s financial regulator announced it was opening an official inquiry into Apple Card’s algorithms.

“On Saturday morning, I read a Twitter thread from an Apple Card user — tech entrepreneur David Heinemeier Hansson — detailing how his card’s credit limit was considerably higher — twenty times — than that of his wife, despite his wife having a higher credit score,” said Linda Lacewell, Superintendent of NYDFS.

“I responded, announcing that the New York State Department of Financial Services (DFS) would examine whether the algorithm used to make these credit limit decisions violates state laws that prohibit discrimination on the basis of sex.”

Apple and Goldman Sachs were not immediately available for comment. ®


Admins snoozing on patching despite reports of active attacks

The flurry of reports in recent weeks of in-the-wild exploits for the Windows RDP ‘BlueKeep’ security flaw had little impact among those responsible for patching, it seems.

This according to researchers with the SANS Institute, who have been tracking the rate of patching for the high-profile vulnerability over the last several months and, via Shodan, monitoring the number of internet-facing machines that have the remote desktop flaw exposed.

First disclosed in May of this year, BlueKeep (CVE-2019-0708) describes a bug in the Windows Remote Desktop Protocol that allows an attacker to gain remote code execution without any user interaction. Microsoft has had a patch out for the bug since it was first disclosed.

Over the last week or so, reports emerged that researchers were spotting active exploits for BlueKeep being lobbed at their ‘honeypot’ systems. These attacks were found to be attempts by hackers to infect machines with cryptocoin-mining software, and they led to a series of media reports urging users to patch their machines now that BlueKeep exploits had arrived in earnest.

According to SANS, those reports did not do much to get people motivated. The security institute says that the number of BlueKeep-vulnerable boxes it tracks on Shodan has been on a pretty steady downward slope since May, and the media’s rush to sound alarms over active attacks did not change that.
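For the curious, the kind of tracking SANS describes can be reproduced with Shodan’s official Python library. A minimal sketch – assuming an API key with access to the vuln search filter, which Shodan restricts to academic and higher-tier plans – might look like this:

```python
# Minimal sketch: count internet-facing hosts that Shodan flags as
# vulnerable to BlueKeep (CVE-2019-0708). Requires the official 'shodan'
# package and an API key with access to the 'vuln:' filter.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder
api = shodan.Shodan(API_KEY)

result = api.count("vuln:CVE-2019-0708")
print(f"Hosts currently flagged as BlueKeep-vulnerable: {result['total']}")
```

Run daily, a counter like this produces exactly the kind of downward-sloping trend line the SANS researchers describe.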


“The percentage of vulnerable systems seems to be falling more or less steadily for the last couple of months,” noted SANS researchers Jan Kopriva and Alef Nula, “and it appears that media coverage of the recent campaign didn’t do much to help it.”

That doesn’t, however, mean that there is no threat of a BlueKeep malware outbreak. While the SANS duo say that vulnerable machines are decreasing in number, there are still more than enough exposed boxes to make for an attractive exploit target.

“Since there still appear to be hundreds of thousands of vulnerable systems out there,” they point out, “we have to hope that the worm everyone expects doesn’t arrive any time soon.”

Fortunately, this week will be a good time for users and admins to get themselves caught up on patches for BlueKeep and the other security fixes that Microsoft has posted over the summer.

With the November edition of Patch Tuesday slated to land tomorrow, users can fire up Windows Update and get that and previous security fixes to make sure they are protected from all of the known vulnerabilities. ®


Apple has banned an app that let people monitor others’ activity on Instagram.

Like Patrol charged a fee to notify users which posts their friends had “liked” and who they had recently followed.

The action comes a month after Instagram had tried to force the app to shut down after accusing it of scraping people’s data without their consent.

Like Patrol’s Mexico-based developer insists the app merely utilised public data.

And Sergio Luis Quintero told the BBC that he now plans to challenge Apple’s ban.

“We plan to appeal this decision in the coming days,” he said.

He added that he also intended to make Like Patrol’s code open source so that others could reproduce its functionality.

The app’s removal was first reported by Cnet.

Like Patrol was never offered on Google Play.

‘On steroids’

Until recently, Instagram offered its own more basic means to see what friends were up to on its platform.

But it removed the Following Tab in October after acknowledging some users had been “surprised” to learn their activities could be tracked via the facility.

Mr Quintero had described Like Patrol as being a version of the tab “on steroids”.

The “insights” it offered included:

  • a way to expose “lustful behaviour” by tracking all the “likes” a followed person had given to “models”
  • a means to identify “flirtatious behaviour” by providing a list of whom their friends had interacted the most with via comments and “likes”
  • a way to keep track of each target’s own popularity by identifying which other users had “liked” their posts most frequently over recent days

The service had proved popular with some users.

“Great tool to keep track of my teenagers… without them thinking I’m being nosy,” read one review on Apple’s App Store.

But several technology blogs claimed it encouraged “creepy” behaviour.

“Apps such as Like Patrol represent just one of the ways that technology has helped people stalk others,” said security company Malwarebytes.

Instagram itself said the software had violated its policies.

“Like Patrol was scraping people’s data, so we are taking appropriate enforcement action against them,” a spokeswoman said last month.

But Mr Quintero said he did not accept the firm’s criticism.

“There is a strong hypocrisy in Facebook’s condemnation of our app,” he told the BBC.

“Like Patrol does not collect data from Instagram users, it provides the users with a tool to rearrange information that is already available to them.

“Everything the user sees lives only in the user’s device, we do not have a login, we do not centralise any information, if the user deletes the app every bit of data he was able to see in Like Patrol is deleted.”

While Apple has blocked new users from downloading the app, it is not wiping it from iPhones it has already been loaded on to. So, in theory, Like Patrol could continue serving its existing members.

But some are hoping it will now be abandoned.

“This app may well be gone but there are undoubtedly many more still out there,” said Lisa Forte, founder of Red Goat Cyber Security.

“Our data and privacy is valuable. Apps like this one can be hugely intrusive.

“Be very cautious with what apps you decide to download and always keep your phone updated.”

Uber PRs missing the days of Travis Kalanick

Opinion Two years ago, Uber CEO Dara Khosrowshahi was brought in to help the company recover from a long series of ethical and moral lapses. But based on an interview this week, it seems the company’s culture may be rubbing off on him more than he is impacting it.

Pressed on the issue of the Saudi Arabian government’s investment in the company and specifically Uber board member Yasir al-Rumayyan, who represents Saudi Arabia’s Public Investment Fund, Khosrowshahi was notably uncomfortable. But that discomfort soon turned into something far worse.

The journalist in question, Dan Primack of Axios, pushed on the fact that the Saudi government had murdered journalist Jamal Khashoggi at its consulate in Istanbul – an act that has been called a “deliberate, premeditated execution” by the UN – and asked if it was appropriate for a representative of that government to sit on the board of an American company.

Khosrowshahi not only fluffed the response but did something far, far worse: he downplayed the murder of an innocent man, calling it a “mistake”, then compared it directly to his own company’s “mistake” when it ran down and killed a pedestrian in a self-driving car. He then argued that everyone should be forgiven, and defended the Saudi government’s investment in Uber – all while being given multiple opportunities to backtrack. You can see the car-crash interview (pun intended) below.

Youtube Video

It’s easy to see the response as a temporary lapse in judgment under pressure – and indeed that’s what Khosrowshahi has argued in a subsequent response the day after the interview. He tweeted: “I said something in the moment that I do not believe. When it comes to Jamal Khashoggi, his murder was reprehensible and should not be forgotten or excused.”

Money talks

But the fact that the Uber CEO’s first instinct was to defend the murder of a journalist in order to avoid upsetting an investor, and that he then repeatedly failed to recognize the seriousness of the situation – calling it first a “mistake” and then a “serious mistake” – is an extraordinary indication of the continued lack of morals or ethics at the ride-hailing company.

Presumably, in Khosrowshahi’s mind, his attempted pivot to the company’s own responsibility for the death of pedestrian Elaine Herzberg was a way of getting onto firmer ground while also indicating that Uber was taking the situation seriously. But then, in the same sentence, he made it plain that he thinks Uber should be forgiven for its failure to consider the existence of jaywalkers in its software, a failure which resulted in Herzberg’s death.

By directly comparing the two, he ended up implying that the Saudi government should be forgiven for its “mistake” of planning the murder and dismemberment of a vocal critic of the country’s leadership.

In short, it was an unbelievably bad response, and one that makes you think of an oft-quoted study by Australia’s Bond University and a researcher from the University of San Diego that found 21 per cent of senior professionals in the US had “clinically significant” levels of psychopathic traits. Which is roughly the same percentage as professional criminals.

Quick test

Is Uber CEO Dara Khosrowshahi a psychopath? Well, let’s take a look at the main traits that the most respected psychopath test – the revised Psychopathic Personality Inventory – looks for:

  • Machiavellian egocentricity: a lack of empathy and a sense of detachment from others
  • Social potency: the ability to charm and influence others
  • Coldheartedness: a distinct lack of emotion, guilt, or regard for others’ feelings
  • Carefree nonplanfulness: difficulty in planning ahead and considering the consequences of one’s actions
  • Fearlessness: an eagerness for risk-seeking behaviors, as well as a lack of the fear that normally goes with them
  • Blame externalization: an inability to take responsibility for one’s actions, instead blaming others or rationalizing one’s behavior
  • Impulsive nonconformity: a disregard for social norms and culturally acceptable behaviors
  • Stress immunity: a lack of typical marked reactions to traumatic or otherwise stress-inducing events

So how does responding to a question about the gruesome murder of a journalist by calling it a mistake, equating that mistake to another in which your company killed someone because it failed to consider an obvious component of driving on roads in its self-driving car program, and then insisting that you be forgiven, score against that list?

Well, let’s be honest, not well. Still, Khosrowshahi apologized on Twitter so there’s that.

Uber’s share price went up 0.4 per cent today. ®


Educational institutions main target during September spike

Kaspersky researchers have blamed pesky schoolkids for the big September spike in denial-of-service attacks.

They found that more than half of the DDoS attacks seen in the third quarter happened in September. Overall, attacks were up just over 30 per cent compared to the second quarter, and up by a similar amount compared to the same period last year.

But unlike in other periods, the growth was mostly down to quite simple methods rather than an increase in smart, application-based attacks. That, and the targeting of mainly educational sites – 60 per cent of stopped attacks were against schools, universities or electronic journals – led Kaspersky to believe that students are to blame for the uptick.

The Russian security firm said: “We observed a similar picture last year, since it is due to students returning to school and university. Most of these attacks are acts of cyber hooliganism carried out by amateurs, most likely with no expectation of financial gain.”

Alexey Kiselev, biz dev manager on the Kaspersky DDoS Protection team, said: “Despite this spell of seasonal activity from young hooligans, who appear to celebrate the beginning of the school year with a spike in DDoS attacks, the more professional market of DDoS attacks is rather stable. We have not seen an explosive increase in the number of smart attacks.”

Kiselev noted that, whoever was responsible, DDoS attacks can still cause serious and expensive headaches for businesses and other organisations.

Researchers found there is still a substantial role played by DDoS-for-hire websites. Despite efforts by the FBI to take them down, new sites have sprung up in their place.

Kaspersky believes the multiple attacks on World of Warcraft Classic servers in early September were run via automated DDoS-as-a-service websites, and that the person arrested for the attacks was likely just a client of such a site rather than a skilled hacker.

Researchers have also noted a geographic shift of DDoS attacks with developing countries playing an increasing role as smartphones and broadband routers become more common. At the same time, cybersecurity awareness continues to increase and better use of defences at provider level in countries where cybercrims have been active for a long time pushes attackers to look for easier pickings. These two factors pushed South Africa into the top 10 ranking for the first time in fourth place behind China, the US and Hong Kong.

In the fourth quarter, the security firm expects to see growth in the total number of attacks and in attack duration, as well as in the number of smart attacks. This will be fuelled by criminals looking to exploit increased commercial activity around Christmas, but it expects the growth to be fairly moderate as the DDoS market stabilises.

Kaspersky makes its analysis and predictions using commands intercepted on their way from command-and-control servers to the bots they direct. ®


Just over 10% of British homes now have a full-fibre broadband connection, a study suggests.

The figures were compiled by consumer broadband advice site ThinkBroadband.com, which counts only “live” connections in its estimate.

Full-fibre links are among the fastest available, theoretically capable of handling gigabits of data every second.

The 10% landmark was passed late last week, a significant increase on June, when 8.1% had the fast links.

Some areas of the UK had wider access to full-fibre than others, Andrew Ferguson, from Think Broadband, told BBC News.

And the technology was available to more than 10% of homes in only 100 of the UK’s 420 council regions.

Kingston upon Hull, Yorkshire, tops the table at 98.7% availability. Belfast is in second place, at 53%, and York third, at 52%.

The steady shift towards full-fibre had come from work done by Openreach, CityFibre, Hyperoptic and Community Fibre to lay cables and offer services, Mr Ferguson said.

The landmark has been passed just as Vodafone has struck a deal with Openreach to offer full-fibre to more than 500,000 homes and businesses in Birmingham, Bristol and Liverpool.

Vodafone said its service would offer speeds of up to 900Mbps.

By contrast, more than 96% of the UK can get superfast broadband, which runs at 24Mbps or above. Ultrafast connections, which can hit 100Mbps and above, are available to 59% of UK premises.
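To put those tiers in perspective, here is a back-of-the-envelope comparison of how long a 5GB download would take at each advertised speed – idealised line rates only, ignoring protocol overhead and contention:

```python
# Back-of-the-envelope download times for a 5GB file at each broadband tier.
# Idealised: assumes the full advertised line rate with no overhead.
FILE_MEGABITS = 5 * 8 * 1000  # 5 gigabytes expressed in megabits

for label, mbps in [("Superfast (24Mbps)", 24),
                    ("Ultrafast (100Mbps)", 100),
                    ("Full-fibre (900Mbps)", 900)]:
    minutes = FILE_MEGABITS / mbps / 60
    print(f"{label}: about {minutes:.1f} minutes")
```

At 24Mbps the file takes nearly half an hour; at 900Mbps, well under a minute.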

Reaching the first 10% of coverage had taken about a decade, said Mr Ferguson, but the next 10% would probably be achieved in the next 12-18 months.

“That’s all dependent on how fast the builds go,” he said. “A bad winter may slow things down.”

Gather round for this must-watch vid podcast

Webcast The Register‘s storage editor Chris Mellor will interview Qumulo veep Molly Presley in a webcast set to be streamed on 19 November.

Over the course of the chat, we will explore the developing file-storage landscape, examine how file and object storage can interact, and discuss public cloud services.

Indeed, what role does public cloud play in file storage? Is it a cheap place to stash old files, or can it bring greater benefits to your business?

We hope to answer these questions before diving into the technology, from tiers and flash drives to NVMe and API access to secondary data capabilities and metadata to real-time analytics.

Finally, we’ll probe the explosion in workplace documents: are file systems becoming too complex for mere humans to handle, and can artificial intelligence lend a hand here?

To sign up for this webcast, sponsored by Qumulo, click right here.


Live from Cape Canaveral: El Reg watches Falcon do its stuff while astronomers worry about the skies

The first upgraded batch of Starlink satellites was launched by SpaceX today, on a mission marking the fourth flight of its Falcon 9 booster and the first reuse of a payload fairing.

While there is every possibility the booster could be used again, the fairing halves met a watery end after the company elected to cancel a further recovery attempt.

The Register was there to watch the Falcon 9 head into the Florida sky as the rocket left Cape Canaveral Air Force Station’s Space Launch Complex 40 (SLC-40) at 14:56 UTC.

While it’s no Space Shuttle, the power of the nine Merlin engines was enough to shake the ground and light up the horizon.

This snap was taken by Alan Page, who had a better camera than this impoverished Reg hack

It is also a good deal cheaper and, it would appear, easier to reuse than the orbiter of old. And the noise still managed to ruffle a vulture feather or two as Musk’s finest blasted into the atmosphere.

That booster had previously been used for the Iridium-7, SAOCOM-1A and Nusantara Satu missions. Reusing the rocket over and over again is a key part of SpaceX’s business plan.

At just after the 2-minute-30-seconds mark, the first stage cut off and began its descent to a drone ship stationed out in the Atlantic – sadly a long way out of sight; a return to land was not possible this time around. The second stage then sent the payload of 60 Starlink satellites to the desired altitude of approximately 280km.

The satellites themselves will undergo checkouts by engineers before using their own onboard ion thrusters to head to their required orbits. Since the launch of the first batch of the broadband birds, back in May, SpaceX engineers have upgraded things to maximise the use of both the Ka and Ku bands.

The enhancements have meant that the satellites have bloated out a little, and SpaceX declared that the payload of 60 was the “heaviest” to date.

Worryingly, those upgrades do not seem to have done much for their reliability, as SpaceX also admitted that one of the Starlink satellites on the launch was looking a little iffy before the rocket had even left the pad.

That will worry scientists wringing their hands about the impact constellations like those planned by SpaceX will have on the sky and neighbouring spacecraft. ESA has already had to dodge one Starlink satellite after Musk’s rocketeers failed to pick up the phone.

If only they had some sort of communications network.

Assuming the failure rate isn’t too alarming, the gang plans to bring service to the US and Canada after six launches, with global internet coverage complete following 24 launches. Whether Earth-dwellers like it or not.

Indeed, scientists have expressed alarm at the prospect of thousands of the things orbiting the planet. At a recent ESTEC event, Mark McCaughrean, senior advisor for Science & Exploration at the European Space Agency, asked attendees to ponder who actually “owned” the night sky amid plans by billionaires to spray Earth orbit with tens of thousands of satellites.

While a Falcon 9 launch and landing is a hugely impressive technical feat, those worries about the impact of Starlink (and its rivals) have not gone away.

As for McCaughrean, he elected to express his opinion via mime.

Quite. ®
