『English Book』Big Data: A Revolution That Will Transform How We Live, Work and Think (jointly recommended by well-known figures including Tian Suoning, chairman of venture fund China Broadband Capital, and Zhang Yaqin, chairman of Microsoft China; shortlisted for the Financial Times and Goldman Sachs Business Book of the Year)

Store catalog no.: 2035301
Category: Simplified-Chinese books → Original English-language books
Authors: Viktor Mayer-Schönberger, Kenneth Cukier
ISBN: 9781848547919
Publisher: Orion
Publication date: 2013-02-01
Edition: 1   Printing: 1
Pages: 242
Trim size: 16开   Binding: paperback

Price: NT$ 1,096


Frequently bought together:

+ 《 The New Digital Age: Reshaping the Future of People, Nations and Business 》 NT$ 910
+ 《 The Black Swan: The Impact of the Highly Improbable 》 NT$ 1,253
+ 《 Zero to One: Notes on Startups, or How to Build the Future 》 NT$ 753
+ 《 The Everything Store: Jeff Bezos and the Age of Amazon 》 NT$ 377
+ 《 Thinking, Fast and Slow 》 NT$ 654
+ 《 大数据时代 》 (the Chinese edition, debated on CCTV's Dialogue episode "Who is detonating big data?"; billed as the best big-data monograph in the world to date, jointly recommended by ten experts including Tian Suoning, chairman of China Broadband Capital, Zhang Yaqin, Microsoft corporate senior vice president, and noted IT commentator Xie Wen) NT$ 464
Editorial recommendation:
The still-unfolding American "PRISM" affair has made more Chinese readers aware that PRISM has lifted China's information security to a more macro level: against the background of cloud computing and big data, data security demands far more attention. In the past we only ran risk assessments on a single small system or device, but local risks accumulate, and in the big-data era an important piece of information can be pieced together from scattered fragments.
In the big-data era, the information flowing across networks increasingly governs the workings and lifelines of nations; seemingly unrelated data, once combined and deeply mined, may give away information of national importance.
In the big-data era, many seemingly unrelated data points can become important confidential data once collated and analyzed, and their leakage is more dangerous than the leak of highly classified data kept behind walls. Former CIA director David Petraeus once explained what makes "big data" distinctive with an example: someone here has just bought something, and someone there has just made a phone call; on the surface the two may have nothing to do with each other, yet big-data analysis may reveal that a customs and immigration issue connects them.
The CIA has likewise claimed that big-data analysis has let the United States foil more than 50 terrorist attacks on the US and its allies in recent years, most of them planned by terrorists outside the United States.
So as to the Snowden affair, and whether the United States is right or wrong to use big data to monitor network users around the world, readers can judge for themselves after reading this book.
Synopsis:
A revelatory exploration of the hottest trend in technology
and the dramatic impact it will have on the economy, science, and
society at large.
Which paint color is most likely to tell you that a used car is
in good shape? How can officials identify the most dangerous New
York City manholes before they explode? And how did Google searches
predict the spread of the H1N1 flu outbreak?
The key to answering these questions, and many more, is big
data. “Big data” refers to our burgeoning ability to crunch vast
collections of information, analyze it instantly, and draw
sometimes profoundly surprising conclusions from it. This emerging
science can translate myriad phenomena—from the price of airline
tickets to the text of millions of books—into searchable form, and
uses our increasing computing power to unearth epiphanies that we
never could have seen before. A revolution on par with the Internet
or perhaps even the printing press, big data will change the way we
think about business, health, politics, education, and innovation
in the years to come. It also poses fresh threats, from the
inevitable end of privacy as we know it to the prospect of being
penalized for things we haven’t even done yet, based on big data’s
ability to predict our future behavior.
In this brilliantly clear, often surprising work, two leading
experts explain what big data is, how it will change our lives, and
what we can do to protect ourselves from its hazards. Big Data is
the first big book about the next big thing.
About the authors:
VIKTOR MAYER-SCHÖNBERGER is Professor of Internet Governance
and Regulation at the Oxford Internet Institute, Oxford University.
A widely recognized authority on big data, he is the author of over
a hundred articles and eight books, of which the most recent is
Delete: The Virtue of Forgetting in the Digital Age. He is on the
advisory boards of corporations and organizations around the world,
including Microsoft and the World Economic Forum.
KENNETH CUKIER is the Data Editor of The Economist and a
prominent commentator on developments in big data. His writings on
business and economics have appeared in Foreign Affairs, the New
York Times, the Financial Times, and elsewhere.
Excerpt:
1
NOW
IN 2009 A NEW FLU virus was discovered. Combining elements of the
viruses that cause bird flu and swine flu, this new strain, dubbed
H1N1, spread quickly. Within weeks, public health agencies around
the world feared a terrible pandemic was under way. Some
commentators warned of an outbreak on the scale of the 1918 Spanish
flu that had infected half a billion people and killed tens of
millions. Worse, no vaccine against the new virus was readily
available. The only hope public health authorities had was to slow
its spread. But to do that, they needed to know where it already
was.
In the United States, the Centers for Disease
Control and Prevention (CDC) requested that doctors inform them of
new flu cases. Yet the picture of the pandemic that emerged was
always a week or two out of date. People might feel sick for days
but wait before consulting a doctor. Relaying the information back
to the central organizations took time, and the CDC only tabulated
the numbers once a week. With a rapidly spreading disease, a
two-week lag is an eternity. This delay completely blinded public
health agencies at the most crucial moments.
As it happened, a few weeks before the H1N1 virus
made headlines, engineers at the Internet giant Google published a
remarkable paper in the scientific journal Nature. It created a
splash among health officials and computer scientists but was
otherwise overlooked. The authors explained how Google could
“predict” the spread of the winter flu in the United States, not
just nationally, but down to specific regions and even states. The
company could achieve this by looking at what people were searching
for on the Internet. Since Google receives more than three billion
search queries every day and saves them all, it had plenty of data
to work with.
Google took the 50 million most common search terms
that Americans type and compared the list with CDC data on the
spread of seasonal flu between 2003 and 2008. The idea was to
identify people infected by the flu virus by what they searched for
on the Internet. Others had tried to do this with Internet search
terms, but no one else had as much data, processing power, and
statistical know-how as Google.
While the Googlers guessed that the searches might
be aimed at getting flu information (typing phrases like “medicine
for cough and fever”), that wasn’t the point: they didn’t know, and
they designed a system that didn’t care. All their system did was
look for correlations between the frequency of certain search
queries and the spread of the flu over time and space. In total,
they processed a staggering 450 million different mathematical
models in order to test the search terms, comparing their predictions
against actual flu cases from the CDC in 2007 and 2008. And they
struck gold: their software found a combination of 45 search terms
that, when used together in a mathematical model, had a strong
correlation between their prediction and the official figures
nationwide. Like the CDC, they could tell where the flu had spread,
but unlike the CDC they could tell it in near real-time, not a week
or two after the fact.
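
To make the mechanics concrete, here is a minimal sketch in Python of the kind of correlation screening the excerpt describes. Everything in it is assumed for illustration: the toy series and the names (candidate_terms, cdc_flu_cases, pearson) stand in for Google's vastly larger pipeline of 50 million terms and 450 million candidate models.

import numpy as np

# Hypothetical stand-ins: weekly frequency of each candidate search term,
# and the CDC's weekly flu-case counts over the same 2003-2008 window.
rng = np.random.default_rng(0)
weeks = 260
cdc_flu_cases = rng.poisson(lam=1000, size=weeks).astype(float)

candidate_terms = {
    "medicine for cough and fever": cdc_flu_cases * 0.8 + rng.normal(0, 50, weeks),
    "flu symptoms": cdc_flu_cases * 1.1 + rng.normal(0, 80, weeks),
    "basketball scores": rng.normal(5000.0, 300.0, weeks),  # unrelated noise
}

def pearson(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

# Screen every candidate term against the official counts and keep the
# strongest correlates; the excerpt says Google ended up keeping 45 terms.
scores = {term: pearson(freq, cdc_flu_cases) for term, freq in candidate_terms.items()}
for term in sorted(scores, key=scores.get, reverse=True)[:45]:
    print(f"{term!r}: r = {scores[term]:.2f}")

Fitting the selected terms into a regression against the CDC series would then yield the near-real-time estimate the excerpt describes.
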
Thus when the H1N1 crisis struck in 2009, Google’s
system proved to be a more useful and timely indicator than
government statistics with their natural reporting lags. Public
health officials were armed with valuable information.
Strikingly, Google’s method does not involve
distributing mouth swabs or contacting physicians’ offices.
Instead, it is built on “big data”: the ability of society to
harness information in novel ways to produce useful insights or
goods and services of significant value. With it, by the time the
next pandemic comes around, the world will have a better tool at
its disposal to predict and thus prevent its spread.

Public health is only one area where big data is making a big
difference. Entire business sectors are being reshaped by big data
as well. Buying airplane tickets is a good example.
In 2003 Oren Etzioni needed to fly from Seattle to
Los Angeles for his younger brother’s wedding. Months before the
big day, he went online and bought a plane ticket, believing that
the earlier you book, the less you pay. On the flight, curiosity
got the better of him and he asked the fellow in the next seat how
much his ticket had cost and when he had bought it. The man turned
out to have paid considerably less than Etzioni, even though he had
purchased the ticket much more recently. Infuriated, Etzioni asked
another passenger and then another. Most had paid less.
For most of us, the sense of economic betrayal
would have dissipated by the time we closed our tray tables and put
our seats in the full, upright, and locked position. But Etzioni is
one of America’s foremost computer scientists. He sees the world as
a series of big-data problems, ones that he can solve. And he has
been mastering them since he graduated from Harvard in 1986 as its
first undergrad to major in computer science.
From his perch at the University of Washington, he
started a slew of big-data companies before the term “big data”
became known. He helped build one of the Web’s first search
engines, MetaCrawler, which was launched in 1994 and snapped up by
InfoSpace, then a major online property. He co-founded Netbot, the
first major comparison-shopping website, which he sold to Excite.
His startup for extracting meaning from text documents, called
ClearForest, was later acquired by Reuters.
Back on terra firma, Etzioni was determined to
figure out a way for people to know if a ticket price they see
online is a good deal or not. An airplane seat is a commodity: each
one is basically indistinguishable from others on the same flight.
Yet the prices vary wildly, being based on a myriad of factors that
are mostly known only by the airlines themselves.
Etzioni concluded that he didn’t need to decrypt
the rhyme or reason for the price differences. Instead, he simply
had to predict whether the price being shown was likely to increase
or decrease in the future. That is possible, if not easy, to do.
All it requires is analyzing all the ticket sales for a given route
and examining the prices paid relative to the number of days before
the departure.
If the average price of a ticket tended to
decrease, it would make sense to wait and buy the ticket later. If
the average price usually increased, the system would recommend
buying the ticket right away at the price shown. In other words,
what was needed was a souped-up version of the informal survey
Etzioni conducted at 30,000 feet. To be sure, it was yet another
massive computer science problem. But again, it was one he could
solve. So he set to work.
Using a sample of 12,000 price observations that
was obtained by “scraping” information from a travel website over a
41-day period, Etzioni created a predictive model that handed its
simulated passengers a tidy savings. The model had no understanding
of why, only what. That is, it didn’t know any of the variables
that go into airline pricing decisions, such as number of seats
that remained unsold, seasonality, or whether some sort of magical
Saturday-night stay might reduce the fare. It based its prediction
on what it did know: probabilities gleaned from the data about
other flights. “To buy or not to buy, that is the question,”
Etzioni mused. Fittingly, he named the research project Hamlet.
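
The decision rule the excerpt outlines can be sketched in a few lines of Python: compare the quoted price with the prices historically paid closer to departure on the same route, and buy only if waiting looks likely to cost more. The observations and the names (history, advise) below are hypothetical stand-ins; Farecast's production models were far richer than this averaging rule.

from statistics import mean

# Hypothetical observations for one route: (days before departure at
# time of purchase, price paid), as scraped from a travel site.
history = [
    (60, 320.0), (45, 300.0), (30, 280.0),
    (21, 310.0), (14, 350.0), (7, 420.0), (3, 480.0),
]

def advise(days_left: int, quoted_price: float) -> str:
    """Recommend BUY or WAIT by comparing the quoted price with the
    average price historically paid later in the booking window."""
    later_prices = [p for d, p in history if d < days_left]
    if not later_prices:
        return "BUY"  # no evidence that waiting ever helped
    return "WAIT" if mean(later_prices) < quoted_price else "BUY"

# Prices on this route tend to rise toward departure, so: BUY.
print(advise(days_left=40, quoted_price=330.0))
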
The little project evolved into a venture
capital-backed startup called Farecast. By predicting whether the
price of an airline ticket was likely to go up or down, and by how
much, Farecast empowered consumers to choose when to click the
“buy” button. It armed them with information to which they had
never had access before. Upholding the virtue of transparency
against itself, Farecast even scored the degree of confidence it
had in its own predictions and presented that information to users
too.
To work, the system needed lots of data. To improve
its performance, Etzioni got his hands on one of the industry’s
flight reservation databases. With that information, the system
could make predictions based on every seat on every flight for most
routes in American commercial aviation over the course of a year.
Farecast was now crunching nearly 200 billion flight-price records
to make its predictions. In so doing, it was saving consumers a
bundle.
With his sandy brown hair, toothy grin, and
cherubic good looks, Etzioni hardly seemed like the sort of person
who would deny the airline industry millions of dollars of
potential revenue. In fact, he set his sights on doing even more
than that. By 2008 he was planning to apply the method to other
goods like hotel rooms, concert tickets, and used cars: anything
with little product differentiation, a high degree of price
variation, and tons of data. But before he could hatch his plans,
Microsoft came knocking on his door, snapped up Farecast for around
$110 million, and integrated it into the Bing search engine. By
2012 the system was making the correct call 75 percent of the time
and saving travelers, on average, $50 per ticket.
Farecast is the epitome of a big-data company and
an example of where the world is headed. Etzioni couldn’t have
built the company five or ten years earlier. “It would have been
impossible,” he says. The amount of computing power and storage he
needed was too expensive. But although changes in technology have
been a critical factor making it possible, something more important
changed too, something subtle. There was a shift in mindset about
how data could be used.