Category Archives: Technology


Samsung Explains Note7 Failure, Promises to Do Better

In addition, Samsung will apply a multilayer safety protocol to all its devices. It will cover the overall design and materials, as well as device hardware strength. Further, it will ensure that software algorithms are in place to keep battery charging temperatures in a safe range.
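A charging-temperature safeguard of this kind can be sketched in a few lines. The thresholds, names and interface below are illustrative assumptions, not Samsung's actual algorithm:

```python
# Toy sketch of a software charging safeguard of the kind described:
# stop charging when the cell is outside a safe temperature window or
# already full. Thresholds and interface are hypothetical, not Samsung's.

SAFE_MIN_C = 0.0     # assumed lower bound for safe charging
SAFE_MAX_C = 45.0    # assumed upper bound for safe charging

def charging_allowed(cell_temp_c: float, charge_fraction: float) -> bool:
    """Return True only when it is safe to continue charging."""
    if not SAFE_MIN_C <= cell_temp_c <= SAFE_MAX_C:
        return False     # outside the safe thermal window
    if charge_fraction >= 1.0:
        return False     # cell is full; avoid overcharge
    return True

print(charging_allowed(25.0, 0.5))   # True
print(charging_allowed(52.0, 0.5))   # False: too hot to charge
```

A real battery-management system would also track charge current, cell voltage and temperature trends over time; this only shows the gating idea.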

“Samsung is doing the right thing. It took its time, but eventually it got enough instances of failed batteries in the lab to figure out what the technical issue was,” said Roger Kay, principal analyst at Endpoint Technologies Associates.

“At the same time, Samsung has been relatively forthcoming about the results and taking responsibility,” he told TechNewsWorld.

“The first thing Samsung had to do was make it clear that it understood the core of the Note7 fire, and it had to ensure that it won’t happen again,” noted Ian Fogg, senior director for mobile and telecoms at IHS Markit.

“It had to make credible assurances to customers, vendors and retailers that this wouldn’t happen with future models,” he told TechNewsWorld. “The announcement today addressed both of those issues.”

Passing the Buck?

Although Samsung addressed what it will do to help avoid future problems, the company did not take full blame for the issue.

“There is plenty of blame to go around here,” said Ramon Llamas, IDC’s research manager for wearables and mobile phones.

“When [Samsung] asks a supplier to ramp up production on batteries to meet demand, there is blame as well,” he told TechNewsWorld.

“The next test will be when they unveil a new flagship phone; they will have to showcase how innovative it is in terms of features, but also address the power in the way it charges and its reliability,” said IHS Markit’s Fogg.

“That will be part of the next announcement for any of its products, as the lithium ion batteries are used in so many products –from laptops to phones and even to cars,” he added.

Galaxy Delays

What is also likely to come out of Samsung’s new emphasis on battery safety is a delay of its next flagship handset, likely the Galaxy S8. It had been expected to make its debut at the upcoming Mobile World Congress in Barcelona, Spain, next month, but that now appears unlikely.

“If they don’t make MWC, that breaks from tradition — but we can expect that the phone is still coming,” said Llamas.

In fact, such a delay could be met positively, as “taken in this context, [it shows] that the company is making sure it does things right next time,” said Endpoint’s Kay.

“The public is both sophisticated about understanding that technical failures occur and ADD enough to forget the past pretty quickly — so if Samsung gets the next one right, all will be forgiven,” he said.

“It is a right step, but that’s all it is,” said Llamas. “Samsung will need to make multiple steps to regain consumer confidence. They’ve identified the problem, but now they need to ensure that this doesn’t happen again.”


Asus Tinker Board Joins the Raspberry Pi Market

Just when you thought Raspberry Pi couldn’t be knocked from its market-leading perch, along comes Asus with a rival device that may give the Pi a run for its relatively little money.

Asus just launched its own low-cost computer, the Tinker Board, which is being sold in the UK and continental Europe for about US$57. Its features could interest open source enthusiasts in doing a little comparison shopping before deciding on a new device.

The Tinker Board features a quad-core 1.8GHz ARM Cortex A-17 CPU with ARM Mali-T764 graphics.

The device includes four USB 2.0 ports, a 3.5 mm audio jack connection, CSI port for camera connection, a DSI port for HD resolution, a micro SD port and contact ports for PWM and S/PDIF signals.

The Tinker Board supports the Debian OS with Kodi.

A power supply is not included.

Rival or Response

“The Asus Tinker Board is not so much competition as an extension of the Raspberry Pi ecosystem; at a deeper level, it shows an extensible ARM ecosystem as well,” said Paul Teich, principal analyst at Tirias Research.

The Tinker Board runs a faster processor and, like the Pi 3 model, implements WiFi and Bluetooth wireless connectivity, he noted.

“I don’t believe anyone in the Raspberry Pi ecosystem is writing or using 64-bit software, so the Pi model 3 upgrade to ARMv8 is a bit mystifying, other than the BCM2837 processor was cheap, fast and available now,” Teich told LinuxInsider.

“The Asus part is substantially more powerful and uses about 25 percent more power,” observed Rob Enderle, principal analyst at the Enderle Group.

The Asus system outputs 4K video, while Raspberry Pi uses HD, he noted.

“This means the Asus part will perform far better when the performance requirement is higher and the need to keep energy cost down is lower,” Enderle told LinuxInsider.

The embedded space has proven to be relatively lucrative and can be a jumping-off point for even bigger markets and technology partnerships, so it’s likely other manufacturers will enter this space as well, he suggested.

Power Play

The Tinker Board's release comes about a week after that of the Compute Module 3 from Raspberry Pi. That module is aimed squarely at expanding the range of the device to industrial uses and the growing IoT audience.

The Compute Module 3’s standard model is priced at $30, and the Compute Module 3 Lite is priced at $25. The Lite has the same processor and RAM as the standard model, but it brings the SD card interface out to the module pins, which allows users to connect their own eMMC or SD card.

The original Compute Module’s price was reduced to $25 when the Compute Module 3 launched.

There has been demand in certain industries for a low-cost open source computer that provides robust capabilities for manufacturing and technical demands.

“We don’t see much mainstream enterprise demand for this type of compute model,” said Jay Lyman, principal analyst at 451 Research, following the Compute Module 3’s release last week.

However, he told LinuxInsider, “we do think it is an attractive model for researchers and other HPC end users that are able to assemble and manage powerful compute capabilities for much less money and resources than is typically associated with supercomputing.”


China Aims to Wash VPNs Out of Its Hair

China this week announced new measures to further restrict its citizens’ access to the Internet.

The 14-month campaign appears designed to crack down on the use of Web platforms and services unapproved by the government, and on virtual private networks, which can be used to access those platforms and services covertly.

While China’s Internet network access services market is facing many development opportunities, there are signs of “disorderly development” that show the urgent need for regulation, the country’s Ministry of Industry and Information Technology explained in a notice posted to a government website.

The coming “clean-up” of China’s network access services will standardize the market, strengthen network information security management, and promote the healthy and orderly development of the country’s Internet industry, the ministry noted.

In order to operate legally, Internet service providers, VPN providers, data centers and content delivery networks will have to obtain a license from the government and adhere to strict limitations.

Great Firewall

The clean-up also places severe new restrictions on cross-border business activities. It requires that government approval be obtained to create or lease lines, including VPN channels, to perform cross-border business activities.

Those restrictions essentially will block any Chinese citizen from using a VPN — basically, hiding their IP address and rerouting their connections to servers outside their country — in order to access websites the government doesn’t want them to see.
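The mechanism can be illustrated with a toy sketch: the client wraps its real destination in an encrypted envelope addressed to the VPN server, which unwraps it and forwards the request, so a censor in between sees only opaque traffic to the VPN server. The XOR "cipher" here is a deliberately weak stand-in for real encryption (such as IPsec or TLS), and all names are hypothetical:

```python
# Toy illustration of what a VPN does: the client encrypts its real
# destination into an envelope addressed to the VPN server, which
# decrypts it and forwards the request. A firewall in between sees
# only opaque traffic headed to the VPN server, not the destination.

import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher: XOR each byte with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def client_send(real_destination: str, key: bytes) -> bytes:
    # The censor observes only this opaque envelope.
    return xor_cipher(real_destination.encode(), key)

def vpn_forward(envelope: bytes, key: bytes) -> str:
    # The VPN server recovers the true destination and forwards traffic.
    return xor_cipher(envelope, key).decode()

key = secrets.token_bytes(16)                 # shared client/server key
envelope = client_send("blocked.example.com", key)
assert envelope != b"blocked.example.com"     # destination hidden in transit
assert vpn_forward(envelope, key) == "blocked.example.com"
```

This is also why licensing VPN providers gives a government leverage: whoever controls the server side of the tunnel can see, log or block the decrypted destinations.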

China is famous for controlling the information its citizens can see on the Internet with its “Great Firewall,” which screens Internet traffic between China and the outside world. Any requests to see information Beijing deems inappropriate are sent to an Internet graveyard.

Among the 171 of the world’s top 1,000 websites that the Great Firewall blocks are Google, Facebook and Twitter, according to GreatFire.org, a censorship monitoring service. VPNs offer a way to get through the firewall, which is why the government wants to block them.

China also has taken a more proactive approach to dealing with websites it doesn’t like. It crafted the “Great Cannon,” which it uses to launch DDoS attacks on domains critical of Beijing.

Shaping the Narrative

China’s government has attempted to restrict VPN access in the past, particularly at sensitive times, such as when the national Communist party convenes. Such a meeting is scheduled for the end of this year.

The great clean-up may be a departure from the past, however.

“This new directive may be a sign that the restrictions might become more systematic,” said Cynthia Wong, senior Internet researcher at Human Rights Watch.

In the past, enforcement of VPN restrictions seemed spotty. Sometimes they worked; sometimes they didn’t.

“Part of the problem with censorship in China is it’s often opaque,” Wong told TechNewsWorld.

“Users are often left wondering why their VPN isn’t working. Is it because of technical problems or is it because of the government?” she wondered. “This needs to be viewed as part of a broader crackdown on any kind of independent media by the Chinese government. In recent years, the government has doubled down on efforts to restrict any information that diverges from its official narrative.”

VPNs are used for many purposes in China, though — among them to keep companies’ discussions about their intellectual property and market strategies secure.

“I would hope industry pushes back on this, because it will be much more difficult to run innovative businesses in China without full access to information,” Wong said. “It’s in their interest for this to be a concern for them, and they should be concerned about corporate espionage as well.”

Not Good for VPN Sales

Once the great clean-up gets under way, it’s going to be difficult to sell VPNs in China.

“What they’re saying is they want to listen in on VPN connections,” explained Glenn Chagnot, vice president of marketing for Uplevel Systems. “In order to meet that requirement, we’d have to re-architect our product.”

That’s because with Uplevel’s product, the encryption keys reside with the user, so the VPN provider has no way of decrypting the user’s traffic.

Uplevel has limited its sales to the United States because selling VPNs internationally can be challenging, Chagnot noted.

“The technical requirements vary from country to country,” he told TechNewsWorld. “What works for the U.S. doesn’t necessarily work for Europe and doesn’t necessarily work for China.”

Asked if more and more countries are seeking the power to snoop on VPNs, Chagnot replied, “Absolutely.”


Tools You Need to Customize a Computer

There are plenty of reasons to build a custom computer. While custom computers may initially be more expensive than prepackaged desktops or laptops, they can provide you with nearly endless possibilities, whether you’re looking for a top-notch gaming machine, a system for mixing music, or the ideal choice for developing Web applications.

A custom computer is the way to go if you want both performance and flexibility. Upgrading individual parts often is less expensive than buying a new computer, which could save you money in the long run.

Following are the essential parts you’ll need.

Processor and Motherboard

The component to start with is the processor, which will dictate your selection of other necessary parts, like the motherboard. UserBenchmark’s exhaustive list of user-rated processors is a good resource to help you decide. AMD and Intel are the top manufacturers, but I prefer Intel.

Intel is the industry standard when it comes to processors, so you can’t go wrong if that’s your choice. Its Core series comes in three families: i3, i5 and i7. The i3 series is good for average computing needs, while the i5 offers a little more horsepower. The i7 series offers you the best performance. For the price, a Core i7-6700k really can’t be beat.

After you choose your processor, select a motherboard to go with it. Make sure it is USB 3.1/3.0-capable for optimal speed. One factor to consider is whether you plan on overclocking, which involves running your PC at a speed higher than manufacturer recommendations.

While you can benefit from short-term performance boosts, overclocking may lead to a shorter lifespan for your computer, so you’ll need to consider a compatible motherboard if you plan to do it.

Storage and Memory

Next, choose the storage you want to use. HDDs are the traditional spinning hard drives that most computers have, and they are extremely affordable.

However, SSDs are the choice for sheer performance, and their prices are dropping.

I prefer a hybrid option that includes both. A computer built with its system files on a smaller SSD will boot faster, while a larger and cheaper HDD in the 2-TB range gives plenty of storage.

Decide how much RAM you need. If you plan on running a 32-bit OS, there is no point in going beyond 4 GB of memory, since the OS can't address any more (and chipset reservations typically leave only about 3 to 3.5 GB usable). Most likely, though, you will be using a 64-bit architecture, where 4 GB is the minimum.
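The 32-bit ceiling is simple arithmetic: a 32-bit pointer can take on only 2^32 distinct values, so it can address at most 4 GiB of memory, while 64-bit addressing removes the limit for any realistic amount of RAM:

```python
# A 32-bit pointer has 2**32 possible values, so a 32-bit OS can
# address at most 4 GiB; device and OS reservations typically leave
# roughly 3 to 3.5 GB of that usable by applications.

addressable_bytes_32 = 2 ** 32
print(addressable_bytes_32)              # 4294967296
print(addressable_bytes_32 / 1024 ** 3)  # 4.0 (GiB)

addressable_bytes_64 = 2 ** 64
print(addressable_bytes_64 / 1024 ** 4)  # 16777216.0 (TiB)
```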

RAM is a relatively cheap upgrade for the performance you get in return. Choose 8, 12, or 16 GB for a better user experience.

You can also put in a DVD/CD drive, though it is not necessary, thanks to portable storage and cloud-based software.

Video and Audio Cards

If you intend to play video games, create digital graphics or edit video, you should invest in something more advanced than a basic video card.

For enhanced graphics, AMD, ATI or Nvidia cards will do the trick. The AMD Radeon RX 460 is a reasonably affordable option that can also handle the needs of most casual gamers.

The same goes for your audio card: If you are editing audio files, you should always opt for a higher-quality card that’s compatible with the peripheral equipment you want to connect.

Case, Power and Cooling

You have to buy a case to hold all of that amazing hardware! There are many types of cases on the market with different features. Many cases have a rudimentary power supply and cooling fans. However, if you are building a high-performance system, they are probably inadequate.

All that performance generates heat. Too much heat will cause your computer to crash and may even damage hardware, so be sure to invest in some quality computer cooling fans.

At a minimum, you will want one attached to your CPU heatsink, one larger fan to exhaust heat from the case — and if not built in, one to disperse heat from your graphics card.

The more powerful your components, the more power you’ll need to run your system properly. You don’t want to burn through a cheap power supply and have everything shut down on you.

Plan on at least a 500-watt power supply, but if you’ve opted for a bigger processor, graphics card, and the requisite fans, you’ll need something with more juice. Your components may come with recommended power allowances. If not, I suggest at least a 750-watt power supply.

Final Thought: Don’t be afraid to invest more money up front, as your custom machine can provide years of use before you’ll need to upgrade it again. Good luck with your project — and most of all, have fun!


Apple Formally Joins High-Powered AI Partnership

The Partnership on AI to Benefit People and Society on Friday announced that Apple, well known for its culture of secrecy, has joined the organization as a founding member.

The other founding members are Amazon, Facebook, Google/DeepMind, IBM and Microsoft.

The group also announced the final composition of its inaugural board of trustees, naming six new independent members: Dario Amodei of OpenAI, Subbarao Kambhampati of the Association for the Advancement of Artificial Intelligence, Deirdre Mulligan of UC Berkeley, Carol Rose of the American Civil Liberties Union, Eric Sears of the MacArthur Foundation, and Jason Furman of the Peterson Institute for International Economics.

They will join Greg Corrado of Google/DeepMind, Tom Gruber of Apple, Ralf Herbrich of Amazon, Eric Horvitz of Microsoft, Yann LeCun of Facebook, and Francesca Rossi of IBM.

Gradual Buildout

The group plans to announce additional details sometime after the board’s Feb. 3 meeting in San Francisco, including how other organizations and individuals can join. It also will address initial research programs and activities.

The board will oversee general activities of the Partnership on AI, and an executive steering committee will commission and evaluate activities within the overall objectives and scope set up by the board of trustees. The board will appoint an executive director, who will oversee day-to-day operations.

The Partnership on AI, announced last fall, aims to advance public understanding of artificial intelligence and formulate best practices. It plans to conduct and publish research under an open license in areas such as ethics, fairness, inclusivity, transparency and privacy.

Closely Held

The announcement of Apple’s participation is particularly significant in light of the company’s well-earned reputation for organizational secrecy. There recently have been signs of blowback against that corporate culture, both inside and outside of the organization.

Apple last fall hired Carnegie Mellon’s Russ Salakhutdinov as its first director of AI research, and he soon announced a policy change that would allow the company’s AI researchers to begin publishing the results of their work, a practice that previously had been out of bounds for Apple employees.

As for why Apple decided to join the partnership now, “Apple does things if and when it wants to, on its own timeline,” observed Charles King, principal analyst at Pund-IT.

“The company may also have wanted to see how the group’s members were organizing themselves, whether they were serious, and how sustainable the effort appeared” before it took that step, he told TechNewsWorld.

Common Interests

Tom Gruber and others at Apple have been working behind the scenes, “communicating and collaborating” with members of the board since before it launched last fall, said company rep Jenny Murphy.

“Apple provided input into the organization’s [memorandum of understanding] and the organization’s tenets,” she told TechNewsWorld. “Apple wasn’t able to formalize its membership in time for the September announcement, but is thrilled now to be officially joining PAI as a founding member.”

It makes sense that Apple would join, as the partnership is about communicating AI to consumers and policymakers, noted Paul Teich, principal analyst at Tirias Research.

“Apple has been very secretive about their AI efforts until quite recently,” he told TechNewsWorld, “so this is another indication that Apple is trying to cooperate with the rest of the industry in presenting a common front regarding the promise of AI for non-technologists.”


BlackBerry, Microsoft and the Ever-Smarter Connected Car

In a couple of interesting briefings last week, BlackBerry announced that its turnaround was finished, and Microsoft finally provided some information on its new connected-car deliverables.

One strange thing was that after CEO John Chen excitedly pointed out that BlackBerry had displaced Microsoft at Ford, he announced a strategic initiative to work more closely with Microsoft’s Azure platform on BlackBerry’s own market-leading QNX car operating system. That showcased not only the massive changes at both companies, but also the really strange way this market is evolving.

I’ll close with my product of the week: a very low-cost wearable smartphone display that could get you through your next dentist appointment or boring sermon.

BlackBerry’s QNX

With all the focus on the coming autonomous car and on BlackBerry’s old phone business, most people don’t know that QNX, the operating system BlackBerry acquired, is dominant in the car market, largely for core car operations. To give you an idea, it currently is in 60M — yes, that’s million — cars. It is ranked No. 1 in telematics and automotive infotainment software, and its main advantage is that it just works and continues to meet all of the car companies’ start-of-production deadlines.

Chances are that if you like the software running the different parts of your car, it is QNX. The car companies like it because it is very secure, their own software developers know it (it’s been dominant for a number of years), and it works on both 32- and 64-bit hardware platforms from folks like Intel, Qualcomm and Nvidia.

As you’d expect from any modern system, it is set up for over-the-air updates, similar to Tesla. In effect, QNX has become the equivalent of Android or Windows, but for the car — and it dominates the segment.

BlackBerry introduced the Karma folks at the briefing to talk about their most advanced offerings. Karma is what became of Fisker — the firm that tried, with some significant drama, to compete with Tesla. (Karma also showcased why Tesla’s decision to use Panasonic batteries turned out to be brilliant.)

Karma’s current car is physically identical to the Fisker Karma, with the exception that all of the electronics have been revamped completely, so it now is reliable. (I was an old Jaguar mechanic, and given the issues the Fisker had with its electronics, I’ve always wondered if they were done by Lucas electrics, which were almost always at the heart of Jaguar reliability issues in the 1960s and 70s.)

However, I spoke to the Karma executives at the event, and their point was that it took them only 15 days to bring up the software on their redeveloped car. (By the way, the new Karma is about US$140K, and it is still a looker.)

BlackBerry currently is pivoting to support the next generation of technology, which includes autonomous vehicles.

Now you’d think that Microsoft and BlackBerry would be at each other’s throats. While BlackBerry has pivoted away from focusing exclusively on secure phones and email, Microsoft has pivoted away from its focus on tools and operating systems.

Microsoft: It’s About Azure

Microsoft has changed a lot over the last several years, since Satya Nadella has been running the firm. I really didn’t get that initially, so the company had to set up a special retraining meeting. I had covered Microsoft for so long that my brain apparently was hard coded to think of it in just one way — and it isn’t that company any more.

Microsoft’s big push with automotive is with Azure now, which is a good thing, because its in-car efforts over the last two decades weren’t that great.

I actually had an AutoPC for a number of years. It was very advanced for its time, but truly flawed — so much so that my wife still threatens to throw something at me — in those early years, the AutoPC itself — if I ever suggest something like that again.

To be fair, that product was crippled by an underperforming processor — but I have to say, there is no misery like having a GPS system that can’t navigate at anything exceeding 25 miles per hour. Of course, I put the AutoPC in her new car, not my own, which in hindsight likely wasn’t that wise. (Yes, I’m also often surprised I’m still married.)

Anyway, Microsoft’s current effort is to provide the cloud resource for cars, so that updates, maintenance, and even some cloud intelligence can be delivered — bad weather, reported accidents, or even neighborhoods in flux and potentially unsafe — in order to improve not only the car maker’s connection to the car, but also the driver’s connection to the car maker.

You see, Tesla — particularly when it pulled an Apple and got hundreds of thousands of pre-orders for a car that wouldn’t arrive for years — woke up the traditional vendors with a holy-crap moment. Now they realize that if they can’t provide some level of better-connected support in the next few years, they’ll likely be gone.

Those companies don’t trust Google at all, for the most part, (which is largely why Google had to spin out its own car effort), and they appear to be more comfortable with Microsoft than with Amazon.

Wrapping Up

Both BlackBerry and Microsoft have changed a lot, and while BlackBerry is focused like a laser on operating systems and related services, Microsoft largely has pivoted to the cloud as its premier platform, and that means they can partner much more easily going forward.

In effect, the firms have pivoted away from each other in terms of focus — and that means rather than having a cage match to the death, they now can cooperate to create something far more powerful.

It is possible that with their combined autonomous car efforts, these companies could do together what neither firm could do individually, and thus dominate the next generation of ever-smarter cars. Go figure.

Product of the Week: Vufine+

The Vufine+, which started out as a Kickstarter project, is basically a small head-mounted display, priced at around $180, that’s useful for watching video or looking at your PC, tablet or smartphone screen while you are moving about.

I wouldn’t suggest using this while driving, running or bike riding, though, as it isn’t a ton less distracting than holding up your phone to watch a video. It is a rechargeable device, but battery life is about an hour and a half, suggesting that if you want to use it longer, you should carry a long USB cable and a cellphone booster battery.

There are some downsides: It isn’t wireless, and it works well only on your right eye (cables stick up if you put it on your left). Unlike Google’s Glass, which used a projector, this is a screen — so lining it up can be a tad difficult the first time.

However, Glass cost something like $1,500 and this is $180, so it is a great little device to figure out what works for you and what doesn’t.

Some people use their Vufine+ to see both their drone and what it sees, but my ideal use is in a dentist’s office where I can watch a movie while having my teeth cleaned and not be bored to tears.

At $180, if I accidentally lost it I wouldn’t be likely to have a coronary.

It works with phones, tablets and laptops that have HDMI out. It comes with an HDMI-to-Micro-HDMI cable, so clearly it’s expected that you’ll start out with a laptop, even though you are more likely to use a Micro-HDMI-to-Micro-HDMI cable (about $10 on Amazon).

Oh, don’t forget headphones or earbuds — wireless if you don’t have a separate HDMI jack or headphone jack on your device — and be aware, your laptop may see an audio component in the device and switch your speakers to it so you can’t hear sound. Go into your control panel and switch that to whatever you actually want to use, or you’ll be listening to the sounds of silence.

The Vufine+ solved a problem for me: It gave me something that allowed me both to watch video and pay attention to the oral hygienist, transforming what always has been an incredibly boring experience (I get really tired of watching the ceiling fan spin) into a more interesting, far faster-moving one. As a result, the Vufine+ is my product of the week.


Three Teams Qualify for Tube Test

Elon Musk’s hyperloop dream began to take shape in reality last weekend as 27 teams, including six from outside the United States, participated in a competition to create the mass transit vehicle of the future.

The competition in Hawthorne, California, sponsored by SpaceX, which Musk founded, attracted teams made up mostly of students who created pods designed to run on hyperloop transportation systems.

In a hyperloop system, the vehicles, or pods, travel in a vacuum in tubes at speeds close to the speed of sound. To do that, the pods have to be suspended slightly off the ground, typically by riding on a magnetic field.

For its competition, SpaceX built a test chamber that was three-quarters of a mile long and six feet wide. The company capped the speed at which a pod could go at around 50 miles per hour.

In order to get to test its pod in the vacuum chamber, a team had to pass a rigorous 101-point review. Only three teams could do that: Delft University of Technology of The Netherlands; Technical University of Munich, Germany; and the Massachusetts Institute of Technology.

Kind of a Drag

Operating in a vacuum is important to hyperloop systems because it reduces friction. “Hyperloop is all about friction,” said Adonios Karpetis, a faculty advisor to the Texas A&M aerospace team, which competed at the event.

“You have to minimize the air friction in the tube,” he told TechNewsWorld.

Creating a vacuum or near-vacuum in the tube nearly eliminates the vehicle's drag, which allows it to reach tremendous speeds, as high as 700 miles per hour. By contrast, a Boeing 747 has a cruising speed of 570 miles per hour.
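The payoff follows directly from the standard drag equation, F = ½ρv²CdA: drag scales linearly with air density ρ, so cutting the tube's density a thousandfold cuts drag a thousandfold. The pod's drag coefficient and frontal area below are assumed for illustration:

```python
# Aerodynamic drag: F = 0.5 * rho * v**2 * Cd * A. Cutting the tube's
# air density rho by a factor of ~1000 cuts drag by the same factor.
# The pod's drag coefficient and frontal area are assumptions.

def drag_force_n(rho_kg_m3: float, v_m_s: float, cd: float, area_m2: float) -> float:
    """Drag force in newtons."""
    return 0.5 * rho_kg_m3 * v_m_s ** 2 * cd * area_m2

v = 313.0                 # ~700 mph expressed in m/s
cd, area = 0.3, 4.0       # assumed pod drag coefficient and frontal area

sea_level = drag_force_n(1.225, v, cd, area)    # sea-level air density
near_vacuum = drag_force_n(0.001, v, cd, area)  # ~1/1000 of an atmosphere

print(round(sea_level))                # ~72,000 N at sea level
print(round(sea_level / near_vacuum))  # 1225: drag falls with density
```

This is also why coasting works so well in a hyperloop tube: with almost no drag force to overcome, very little energy is needed to hold cruising speed.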

“It’s like operating a ground-based vehicle at an altitude of 100,000 feet where the air is very thin,” said Rick Williams, an advisor to Auburn University’s hyperloop team.

A hyperloop vehicle has an advantage over an aircraft, though.

“Once the vehicle reaches its cruising speed, it will coast for a long ways because of the minimal drag,” Williams told TechNewsWorld.

“From an energy standpoint, it’s going to be significantly lower,” he said.

Defining the Unknown

A team didn’t have to use the chamber with the vacuum enabled, though, to learn about improving its pod’s design.

“We could see the different engineering approaches taken by the teams and talk with them about their pods,” explained David Goldsmith, an assistant professor at Virginia Tech University, which participated in the competition.

“We saw pods that had gone down different developmental paths, and we saw pods that had gone down the same path we had,” he told TechNewsWorld. “That was invaluable.”

The competition also helped the teams define the parameters of their knowledge.

“We now know what we don’t know,” Texas A&M’s Karpetis remarked. “We did not know a lot of little things, from electronics, to braking, to levitation, to batteries.”

Journey as Destination

Students also learned the value of dealing with the unexpected.

“We learned a lot about being flexible,” said Claire Holesovsky, Badgerloop operations director at the University of Wisconsin – Madison.

“A lot of times, things don’t go your way, so you have to have alternative plans and think quickly on the spot,” she told TechNewsWorld.

The teams now are looking forward to this summer, when the next round of the competition will be held. They’ll be able to take what they learned in this last round and use it to improve their pods for the next one.

While there are many people who would embrace the idea of a 30-minute hyperloop ride from Los Angeles to San Francisco, Carlo Ratti, director of the Senseable City Lab at the Massachusetts Institute of Technology, isn’t one of them.

“The other week I was in London and I had to go to Paris. I could have traveled from the city center to Heathrow Airport, flown 45 minutes, and then taken another transfer from Charles de Gaulle airport to the city center,” he said.

“This is the kind of experience that hyperloop proposes between San Francisco and Los Angeles, with long transfers to suburban stations and then a short trip in a small, dark tube,” Ratti told TechNewsWorld.

“Instead, I enjoyed very much spending two hours on the Eurostar. I was online, the comfortable seat became my workplace during the trip, and I could enjoy the gorgeous English and French landscape all around,” he explained.

“As our trains — and tomorrow our self-driving cars — become an extension of our offices, homes or even bedrooms, shouldn’t we focus on making them more comfortable and point-to-point, instead of thinking about such a 20th-century idea of fast travel between large hubs?” Ratti asked. “Then, the journey could really become the destination.”

.

Tech Industry Reacts to Trump’s Immigration Order

Uber CEO Travis Kalanick on Thursday resigned from President Trump’s business advisory council amid fierce blowback against the president’s recent executive order on immigration, and in the wake of reports that several major Silicon Valley firms, including Facebook, Apple, Microsoft and Google, have been circulating a draft letter opposing Trump’s action.

Kalanick said he no longer would participate in the council after consumers railed against Uber for continuing to operate at John F. Kennedy International Airport over the weekend. The Taxi Workers Alliance in New York had gone on strike, refusing to pick up fares at the airport, to protest Trump’s executive order on refugee resettlement and travel from seven predominantly Muslim nations.

“Earlier today I spoke briefly with the President about the immigration executive order and its issues for our community,” Kalanick wrote in a memo to employees. “I also let him know that I would not be able to participate on his economic council. Joining the group was not meant to be an endorsement of the President or his agenda, but unfortunately it has been misinterpreted to be exactly that.”

In the memo, Kalanick said he was proud to work with Thuan Pham, Uber’s CTO, and Emil Michael, the company’s senior vice president of business, both of whom are refugees who “came here to build a better life for themselves.”

A number of major tech players have been contemplating the publication of an open letter protesting not only Trump’s so-called Muslim ban, but also other proposed changes that they fear could damage their companies’ ability to conduct business around the world, according to published reports.

The draft letter, first reported by Recode and Bloomberg News, calls on the Trump administration to reconsider several key policies in addition to the halt in refugee resettlement into the U.S.

The letter also urges the administration to reconsider its policies with regard to Dreamers — that is, children of illegal immigrants who face deportation and the breakup of their families, a group that President Obama wanted to protect.

The Silicon Valley leaders’ concerns go to the heart of the ability of their firms to recruit staff and conduct business. Many of their top executives, as well as professional programmers and engineers, have been recruited from Asia, the Middle East, and other parts of the world that are impacted directly by the Trump administration’s recent and proposed executive orders.

Thousands of Silicon Valley professionals work under H-1B visas that allow highly skilled foreigners to remain in the U.S. as long as they continue to work for the companies that recruited them.

After Trump signed the immigration ban, it was not only foreigners attempting to reach the U.S. for the first time who were impacted. Visa holders who were traveling overseas on business also were caught up in the chaos.

Key Contributors

Immigrants are twice as likely as native-born Americans to start new businesses, said Arnobio Morelix, senior research analyst and program officer in research and policy at the Ewing Marion Kauffman Foundation.

More than half of the billion-dollar startups in the U.S. were launched by immigrants, and 70 percent of those unicorn companies have immigrants as key members of their management or product development teams, he told TechNewsWorld.

More than 40 percent of Fortune 500 companies were launched by immigrants and their children, and Silicon Valley is the metro area with the most immigrant entrepreneurs in the country, the foundation’s data shows. Immigrants account for 41.9 percent of entrepreneurs in the San Jose metro area.

It’s unlikely that the Trump administration will back down on the immigration issue, despite the concerns raised by the technology industry in the draft letter that’s been circulating, according to Rob Enderle, principal analyst at the Enderle Group.

“I doubt it,” he told TechNewsWorld, as “the president is pretty set on his plan, and it was a key campaign promise.”

If such an open letter were published, it might upset Trump to the point where he would rescind his pledge to help the technology industry by cutting back on government regulations, Enderle feared.

Secondary Strategy

It’s doubtful the letter tech leaders reportedly are signing would have any effect on the president’s opinion or his executive order, suggested Charles King, principal analyst at Pund-IT.

“In the first two weeks of his administration, Trump has shown himself to be willful, combative and quick to take offense — none of which are qualities one associates with a desire to seek compromise,” he told TechNewsWorld.

That said, “simply writing the letter could be as important for tech companies as finding a way to get through to the administration,” King continued.

“The fact is that the impact of the executive order on Silicon Valley’s employment of foreign-born engineers is just one of the elements in play here. More important will be the effect that Trump’s unilateral actions and ‘America first’ intentions have on foreign markets, many of which are crucial to the current and future health of U.S. tech companies,” he explained.

“If Mr. Trump sparks crises,” said King, “including trade wars with formerly friendly allies — and it seems likely that he will — the immigration letter signers’ willingness to confront the president could help them maintain their good standing with trusted partners and customers who are otherwise threatened by the administration’s policies.”

.

Is Apple’s Outstanding Q1 a Fluke?

Apple had a good quarter, but if you look under the numbers there is a ton of trouble. It just dropped behind Google in brand value, and some analysts have predicted its valuation will crater in a few months.

The iPhone 7 did well, but that shouldn’t be a surprise, given that its biggest competitor, Samsung, saw its phone literally go up in flames last quarter. The Galaxy S8 is coming, though — maybe sooner than anyone expects — and it looks really impressive. This suggests Apple’s one-up quarter is not repeatable, unless Samsung decides burning phones is a feature. (Just think of the marshmallows you could roast right in your car! Or “the Samsung S8 Burns Faster, Better, Hotter!”)

Apple has moved aggressively to get suppliers to cut costs — even going to court in an effort to move some of Qualcomm’s profit to its own bottom line (this rarely ends well). Apple apparently hit a wall on top line growth, even though it is threatening to raise prices. (Good luck with that, because raising prices in a very competitive market ALWAYS ends well.)

Looking back, the Steve Jobs cycle really worked only once. It may not be Tim Cook’s fault it is failing — maybe working once was all it could do.

I’ll explain and then close with my product of the week: the Microsoft Surface Book 2, the halo product in the tablet family that is going up as the traditional iPad goes down.

The Apple Cycle

The Apple product cycle was an amazing thing to watch — not least because it showcased how folks could have, but didn’t, compete with the once unbeatable iPod. When the iPod was at its peak, Sony, Samsung, Dell — and even Microsoft, with the Zune — tried to make a dent in its sales, but they bounced off the product like it was made of diamonds.

The only product that even worried Steve Jobs was a prototype from HP, and he was able to trick HP’s then CEO Carly Fiorina into licensing the iPod instead. Then he really took advantage of her. Imagine how different history would have been for both HP and Carly if, instead of being screwed by Apple, HP had been the only company to displace the iPod. Maybe Fiorina, not Jobs, would have been CEO of the decade. (OK, I doubt it too.)

As it happened, Jobs saw that the real risk to the iPod was the emergence of smartphones that could do what the iPod did better. Instead of defending the iPod, he did what Microsoft and Palm should have done, leading Apple to create the best MP3/phone bundle.

That was particularly embarrassing for Microsoft’s Steve Ballmer, because he’d disagreed with his internal team, which had wanted to do the same thing instead of creating the Zune. Ballmer wasn’t alone, however. Palm’s then CEO also killed a similar effort, saying something like “smartphones are just for business.” There is some irony in HP buying Palm and then burning it to the ground, given its iPod mistake.

The iPod became the iPhone — an even bigger hit — and suddenly we had what looked like an amazingly powerful cycle, which worked pretty well for a decade.

When the Apple Cycle Broke

That successful cycle broke with the iPad. You see, the iPhone was an iPod-plus, so the next product in the cycle should have built on the iPhone — but it didn’t. The iPad is built on the iPod — it basically is an iPod with a bigger screen.

The iPhone already had made the iPod redundant, and the iPhone’s screen eventually grew, so instead of the iPad being an extension of the iPod, the iPhone became an extension of both. Instead of the iPad expanding the market like the iPhone did, it peaked and then went into an impressive nosedive.

Granted, the iPad Pro, which is sort of trying to be a blend of the iPad and MacBook, is having some success — but largely as a result of Microsoft’s Surface efforts. It arguably is doing a better job of slipstreaming the Surface than Zune did the iPod, but a huge hit it isn’t.

Then the Apple Watch came along, breaking Apple’s naming convention. It basically is a small iPod touch with limitations in screen size, features and platform compatibility. The iPod worked with both Windows and macOS; the Apple Watch should work with Android as well as iOS but doesn’t. As a result, the Apple Watch is a crippled wearable iPod, and there should be no surprise it isn’t selling that well, even though it is considered one of the best smartwatches in the segment.

Wrapping Up: One-Trick Wonder

What all of this means is that Steve Jobs really only got this right once. Granted, he was increasingly sick after the iPhone and was gone for the Apple Watch, so he might have figured it out had he been alive and well. This makes me wonder if it even would be possible to extend the iPhone further. Could you create an iWonder product that would expand the smartphone to embrace the PC, for instance?

That is what Microsoft imagined with Continuum — the idea that a smartphone truly could become a PC — and what makes this ironic is that once again, it didn’t execute on what might have been a true iPhone replacement.

If Jobs were around, I’d bet that’s where he would go. What then would be the next step? Maybe some kind of Hololens-like product that could eliminate virtually everything else in its final form? I wonder who is going to get that right?

The most memorable launch of this decade, for me, was the Surface Book. That’s because it was introduced as a laptop computer — and to look at it, it is hard to tell that its screen detaches to become a tablet. With a flourish and the release of an electronic latch, the presenter mirrored what Jobs used to do with his “one more thing” surprise, and I haven’t seen people get so excited about a PC since the ’90s.

I carried the Surface Book for months, and I had just one major complaint — that it couldn’t play any decent games. When you travel as much as I do, there is a lot of down time in airports or between meetings when time just drags. It’s even worse on long flights, when you can’t even move for hours.

I do read a lot, but the way I can burn through hours is with video games. The typical trade-off is that you either get a notebook that is thin and light with good battery life, or you get one that is heavy and thick with lousy battery life, but that plays games. Games don’t pay the bills for me, though.

What is amazing about the Surface Book 2 is that it significantly ups the graphics performance and increases battery life by a third — from 12 to 16 hours — while adding only one-third of a pound of weight.

Granted, you still aren’t at gaming laptop speeds, but this one now runs my current favorite game, Ashes of the Singularity, while the old one wouldn’t.

The Surface Book has never been a cheap date. It costs around US$2K for the sweet spot configuration of an i7 and 256GB SSD, but the Surface Book 2 adds only $400 for a far more capable product with the same options.

It has a couple of shortcomings. To get it out before Christmas, Microsoft had to miss Intel’s latest Kaby Lake processor. Also, it doesn’t have USB-C ports — just the older USB 3.0 configuration. Still, given that most of what I have is still USB 3.0-compatible and that Kaby Lake was a minor upgrade, neither has been a problem.

Carrying the Surface Book is like carrying art. To my eye, it is arguably the best-looking laptop in the market. It can transform into a very attractive tablet that I rarely use — but most important, it will play games! As a result, the Surface Book 2 is my new carry box and my product of the week. It is a pretty thing.

.

How Linux Made Me an Empowered Computer User

If you were to ask any of my friends, they could readily attest to my profound passion for Linux. That said, it might surprise you to know that just two years ago I barely knew what Linux was, let alone had any earnest interest in switching to it from Windows.

Although a shift as dramatic as this may seem astonishing when considered in hindsight, analyzing my path from one push or influence to the next paints a more telling picture. It is with this approach that I want to share my story of how I came to not only use, but indeed champion, the Linux desktop.

My Security Awakening

Before embarking on my journey two years ago, I was just an ordinary Windows user. While I was basically competent and tried to keep abreast of mainstream tech news, I had an unremarkable knowledge of computers.

My attitude quickly began to change in light of the reporting on the intelligence programs of the National Security Agency in the summer of 2013. The breadth of the online monitoring Edward Snowden revealed was unsettling, but it also underscored just how little most of us do — or even know how to do — to safeguard our own privacy.

Whereas I previously gave no particular consideration to computers or their role in my personal affairs, I came to realize the critical importance of taking control of one’s digital life, and of the devices that power it.

The logical next step was to determine exactly how to go about it. Though the goal was clear, achieving it would not be simple. Over the next few months, I devoted my free time to scouring the Internet for guides on deploying privacy protections, encryption, and any other technique that could protect me.

Experts will tell you that if you’re trying to evade intelligence agencies, you should give up. Yet those same experts will tell you that your only recourse for resisting even a fraction of state surveillance — and a decent proportion of monitoring by lesser agencies more likely to target ordinary people — is to use open source software. Linux, I soon found, was chief among those software options.

Proprietary vs. Open Source

Upon further study, I became familiar with just what was so special about open source software. The lion’s share of the software we use every day — from chat clients to operating systems, including Windows — is the opposite of open source software: It’s proprietary.

When Microsoft developers work on Windows, for example, they write the source code in some programming language and circulate this code only among their team. When they’re ready to release the software, they compile it, converting it from human-readable code into the 1s and 0s that computers need to run it, but which even the most brilliant humans struggle to reverse-engineer back into the original source code.

With this model, the only people who know for sure exactly what the software does, or whether it surreptitiously undermines or monitors its users, are the people who wrote it.

Open source software, though, is released to the public in its source code form, along with downloadable binary packages for installation. Whether or not every individual user is capable of reading the source code to assess its security and privacy, the fact that it is public means that those with enough technical chops are free to do so, and they can notify users if the program contains hidden malicious processes or inadvertently buggy ones.

After thorough research, it became clear that the only operating system that could guarantee my privacy and autonomy as a user was one that offered the transparency of the open source philosophy. The one knowledgeable friends and privacy advocates recommended most was Linux. I was ready to endure a rough transition if I had to, but my conviction in the importance of privacy gave me the confidence to try.

Although my resolve to switch to Linux was immediate, the process of migrating to it was gradual. I started by installing Ubuntu — an easily configured and beginner-friendly Linux distribution — to run side-by-side with the existing Windows installation on my aging laptop.

By being able to choose Ubuntu or Windows each time I booted up my computer, I was able to find my footing on Linux while preserving the familiar refuge of Windows in case the former was missing direly needed functionality.

As it turned out, a fatally corrupted hard drive prevented me from enjoying this setup for long, but I took that as an opportunity to pick out a new laptop with Linux in mind. I went with a Lenovo ThinkPad, since its standard Intel processors, graphics and wireless adapters are well supported by Linux drivers.

I made a fresh start, completely wiping Windows from my new machine in favor of Debian, a widely compatible and stable distribution on which Ubuntu is based. More than merely surviving without the familiar Windows safety net, I thrived. I was soon immersing myself in the previously mysterious command line world.

After I had a year of working with Linux under my belt, I took another plunge and installed Arch Linux, which requires a significantly more involved manual installation process, with full disk encryption. That night installing Arch, with a Linux veteran supervising, marked one of the proudest accomplishments of my life.

I had my share of challenges along the way — sometimes applications that worked seamlessly on Windows required laborious extra steps, or missing drivers needed to be installed — but I surmounted or sidestepped them and continued to figure out Linux at my own pace.

As far as I had come, it was then that I really started to learn. I took up Linux to harness the power of my computer and ensure that it worked for me, but what kept me engaged was the freedom of modification and personalization that it offered.

As an open source OS, Linux is infinitely open to customization. Although I initially expected to spend my time reading up on security practices (which I very much still do), I also found myself digging deep into configuration panels and laying out all the colors, icons and menus just so.

It took some getting used to, but the more I threw myself into something new, the more confident — and curious — I became.

A little over two years since setting out down this road, I’ve never felt more at home on my computer than I do today. I couldn’t personalize Windows the way I wanted it, and from what I’ve learned from the open source community, I couldn’t fully trust it, either.

What was once simply a piece of hardware I owned is now something I have a close connection to — not unlike the connection a journalist has to her notebook or a violinist to her instrument.

I even find myself lamenting the fact that my phone is not as amenable to true Linux as my laptop and wondering what I can do about that. In the meantime, though, I will keep tinkering away on my Arch system, discovering new corners and exploring new possibilities whenever the chance arises.