Cybersecurity

Whether it’s foreign government meddling or corporate hacking, every day brings a new challenge in cybersecurity for the United States, said experts at a recent Hoover Institution media roundtable.

Fifteen members of the national media attended the October 1 discussion, “Outside the Beltway,” which included talks by scholars from the Hoover Institution and Stanford’s Center for International Security and Cooperation (CISAC), cosponsors of the full-day event. Outlets represented included the Wall Street Journal, NBC, CNBC, Bloomberg, Reuters, The Hill, CNET, Politico, the Washington Free Beacon, Axios, and RealClearPolitics, among others.

Speakers included Herb Lin, Hoover research fellow and senior research scholar at CISAC; Alex Stamos, Hoover visiting scholar and CISAC fellow; Andrew Grotto, Hoover research fellow and research scholar at CISAC; John Villasenor, Hoover senior fellow; Irving Lachow, Hoover visiting fellow; Sean Kanuck, Hoover visiting fellow; Max Smeets, CISAC postdoctoral fellow; Gregory Falco, CISAC postdoctoral fellow; and others from the private sector.

Cyber conflicts

Grotto said many cyber attackers disguise their operations by using another country’s cyber facilities and locations. He described these as “third country issues,” which arise in cyberspace just as they do on other fronts.

During the Cold War, for example, the Soviet Union used Mexico City as a base of covert operations against the United States, according to Grotto. Even though the United States lodged complaints with the Mexican government, the latter continued to allow the Soviet influence-peddling and spying activities.

Kanuck described how trends in cyberspace are producing outcomes such as “fake news,” “fake” and real crime, European data regulation, and an inability to develop unified, international approaches to cyber deterrence.

Looking ahead, he said we may encounter the first death from a malicious cyber activity, limitations on some cloud services, and continued attempts by foreign and domestic groups to influence elections through social media.

“This is a weapon and a space that is designed to help authoritarians,” Kanuck said, noting that open societies are much more vulnerable to cyber meddling than closed ones that can block websites, and censor and surveil their own people.

Finally, the world will likely one day witness the first official state-sponsored and acknowledged response to a cyberattack, Kanuck added. Russia, for example, is “clearly sending geopolitical signals” about its cyber strengths with its activities.

“Persistent Engagement”

Citing “bad actors” in Russia, China, North Korea, and Iran, Lin discussed current US policy in cyberspace and the “good, bad, and ugly” of the situation. The US government’s recently unveiled strategy is dubbed “persistent engagement.”

Lin said that while the United States may understand the cyber domain it operates in, the prescribed “medicine” is not well understood. He explained that such an approach is untested, and it is unclear whether a strategy built on current assumptions about the nature of the threats will actually reduce them.

“There is no evidence from history or anywhere else to indicate that this strategy will lead to restraint on the part of the adversaries,” Lin said.

Smeets said it is not certain whether current US cyber strategy is actually more aggressive, as not enough operational details are available right now. He spoke about his research into “cyber proliferation” and the difficulty of knowing exactly how many states beyond the four noted “bad actors” are building serious offensive capabilities.

“Low Hum”

Asked about Russian election hacking in the 2018 midterms, Kevin Mandia, a speaker and CEO of the cybersecurity company FireEye, said, “We’re not responding to an elevated state of Russian activity right now. It’s more of a low hum,” different than what his company found during the 2016 election season.

He suggested that relying on diplomacy and holding nations accountable—rather than following a “good offense is the best defense” strategy—is a realistic approach to dealing with cybersecurity threats from closed-society states.

Roundtable topics included technical issues in cybersecurity; cyber challenges, past and present; cyber conflict; and the roles of countries and companies, among others. The event also included a two-hour simulation exercise in which participants assumed the roles of executives at a large, fictitious company under a major cyberattack.

 

MEDIA CONTACTS

Clifton B. Parker, Hoover Institution: 650-498-5204, cbparker@stanford.edu


 

 


Alex Stamos is a cybersecurity expert, business leader and entrepreneur working to improve the security and safety of the Internet. Stamos was the founding director of the Stanford Internet Observatory at the Cyber Policy Center, a part of the Freeman Spogli Institute for International Studies. He is currently a lecturer, teaching in both the Master’s in International Policy program and in Computer Science.

Prior to joining Stanford, Alex served as the Chief Security Officer of Facebook. In this role, Stamos led a team of engineers, researchers, investigators and analysts charged with understanding and mitigating information security risks to the company and safety risks to the 2.5 billion people on Facebook, Instagram and WhatsApp. During his time at Facebook, he led the company’s investigation into manipulation of the 2016 US election and helped pioneer several successful protections against these new classes of abuse. As a senior executive, Alex represented Facebook and Silicon Valley to regulators, lawmakers and civil society on six continents, and has served as a bridge between the interests of the Internet policy community and the complicated reality of platforms operating at billion-user scale. In April 2017, he co-authored “Information Operations and Facebook”, a highly cited examination of the influence campaign against the US election, which still stands as the most thorough description of the issue by a major technology company.

Before joining Facebook, Alex was the Chief Information Security Officer at Yahoo, rebuilding a storied security team while dealing with multiple assaults by nation-state actors. While at Yahoo, he led the company’s response to the Snowden disclosures by implementing massive cryptographic improvements in his first months. He also represented the company in an open hearing of the US Senate’s Permanent Subcommittee on Investigations.

In 2004, Alex co-founded iSEC Partners, an elite security consultancy known for groundbreaking work in secure software development and embedded and mobile security. As a trusted partner to the world’s largest technology firms, Alex coordinated the response to the “Aurora” attacks by the People’s Liberation Army at multiple Silicon Valley firms and led groundbreaking work securing the world’s largest desktop and mobile platforms. During this time, he also served as an expert witness in several notable civil and criminal cases, such as the Google Street View incident, and did pro bono work for the defendants in Sony v. George Hotz and US v. Aaron Swartz. After the 2010 acquisition of iSEC Partners by NCC Group, Alex formed an experimental R&D division at the combined company, producing five patents.

A noted speaker and writer, he has appeared at the Munich Security Conference, NATO CyCon, Web Summit, DEF CON, CanSecWest and numerous other events. His 2017 keynote at Black Hat was noted for its call for a security industry more representative of the diverse people it serves and the actual risks they face. Throughout his career, Alex has worked toward making security a more representative field and has highlighted the work of diverse technologists as an organizer of the Trustworthy Technology Conference and OURSA.

Alex has been involved with securing the US election system as a contributor to Harvard’s Defending Digital Democracy Project and is involved in the academic community as an advisor to Stanford’s Cybersecurity Policy Program and UC Berkeley’s Center for Long-Term Cybersecurity. He is a member of the Aspen Institute’s Cyber Security Task Force, the Bay Area CSO Council and the Council on Foreign Relations. Alex also serves on the advisory board of the NATO Cooperative Cyber Defence Centre of Excellence in Tallinn, Estonia.

Former Director, Stanford Internet Observatory
Lecturer, Masters in International Policy
Lecturer, Computer Science

Alex Stamos, William J. Perry fellow, wrote the following essay for Lawfare:

In the swirl of news this week, it would be easy to miss recent announcements from two of America's largest and most influential technology companies that have implications for our democracy as a whole. First, on Tuesday morning, Microsoft revealed that it had detected continued attempts at spear-phishing by APT 28/Fancy Bear, the hacking group tied to Russia’s Main Intelligence Directorate (known as the GRU). Later that day, my friends and former colleagues at Facebook unveiled details on more than 600 accounts that were being used by Russian and Iranian groups to distort the information environment worldwide.

The revelations are evidence that Russia has not been deterred and that Iran is following in its footsteps. This underlines a sobering reality: America’s adversaries believe that it is still both safe and effective to attack U.S. democracy using American technologies and the freedoms we cherish.

And why wouldn’t they believe that? In some ways, the United States has broadcast to the world that it doesn’t take these issues seriously and that any perpetrators of information warfare against the West will get, at most, a slap on the wrist. While this failure has left the U.S. unprepared to protect the 2018 elections, there is still a chance to defend American democracy in 2020.

From 2014 until very recently, I worked on security and safety at Yahoo and then at Facebook, both companies on the front line of Russia’s information and cyber-warfare campaign. From that vantage point, the facts are indisputable: There was a multiyear effort by a coalition of Russian agents to harm the likely presidency of Hillary Rodham Clinton and sow deep division in America’s political discourse. The uniformed officers of the GRU and the jeans-wearing millennial trolls of the private Internet Research Agency turned American technology, media and this country’s culture of discourse back against the United States. Stymied by a lack of shared understanding of what happened, the government’s sclerotic response has left the United States profoundly vulnerable to future attacks. As a security leader in my former role at Facebook, my personal responsibility for the failures of 2016 continues to weigh on me, and I hope that I can help elucidate and amplify some hard-learned lessons so that the same mistakes will not be made again and again.

The fundamental flaws in the collective American reaction date to summer 2016, when much of the information being reported today was in the hands of the executive branch. Well before Americans went to the polls, U.S. law enforcement was in possession of forensics from the hacks against the Democratic National Committee; important metadata from the GRU’s spear-phishing of John Podesta and other high-profile individuals; and proactive reports from technology companies. Following an acrimonious debate inside the White House, as reported by the New York Times’s David Sanger, President Obama rejected several retaliatory measures in response to Russian interference—and U.S. intelligence agencies did not emerge with a full-throated description of Russia’s meddling until after the election.

If the weak response of the Obama White House indicated to America’s adversaries that the U.S. government would not respond forcefully, then the subsequent actions of House Republicans and President Trump have signaled that our adversaries can expect powerful elected officials to help a hostile foreign power cover up attacks against their domestic opposition. The bizarre behavior of the chairman of the House Permanent Select Committee on Intelligence, Rep. Devin Nunes, has destroyed that body’s ability to come to any credible consensus, and the relative comity of the Senate Select Committee on Intelligence has not yet produced the detailed analysis and recommendations our country needs. Although by now Americans are likely inured to chronic gridlock in Congress, they should be alarmed and unmoored that their elected representatives have passed no legislation to address the fundamental issues exposed in 2016.

Republican efforts to downplay Russia’s role constitute a dangerous gamble: It is highly unlikely that future election meddling will continue to have such an unbalanced and positive impact for the GOP. The Russians are currently the United States’ most visible information-warfare adversaries, but they are not alone. Their proven playbook is now “in the wild” for anyone to use. Recent history has shown that once a large, powerful nation-state actor demonstrates the effectiveness of a technique, many other groups rush to build cheaper, often more nimble versions of the same capability.

The GRU attacks relied upon well-known social engineering and network intrusion techniques. Likewise, the Internet Research Agency’s trolling campaign required only basic proficiency in English, knowledge of the U.S. political scene available to any consumer of partisan blogs, and the tenacity to exploit the social media platforms’ complicated content policies and natural desire to not censor political speech. After Facebook’s announcement on Tuesday, it is clear that Iran has also followed this playbook. There are many other U.S. adversaries with well-developed cyber-warfare capabilities, such as China or North Korea, that could decide to push candidates and positions amenable to them—including those supported by Democrats and opposed by Republicans. There are also domestic groups that could utilize the same techniques, as many kinds of manipulation might not be illegal if deployed by Americans, and friendly countries might not sit idly by as their adversaries work to choose an amenable U.S. government.

In short, if the United States continues down this path, it risks allowing its elections to become the World Cup of information warfare, in which U.S. adversaries and allies battle to impose their various interests on the American electorate.

Enemies aiming to discredit American-style democracy, rather than promote a specific candidate, will not have to wait for election dynamics like those of 2016, when two historically unpopular nominees fought over a precariously balanced electoral map. Direct attacks against the U.S. election system itself—as opposed to influence operations aimed at voters—were clearly a consideration of U.S. adversaries: There are multiple reports of the widely diffuse U.S. election infrastructure being mapped out and experimentally exploited by Russian groups in 2016. While swinging a national vote in a system run by thousands of local authorities would be highly difficult, an adversary wouldn’t need to definitively change votes to be successful in election meddling. Eliminating individuals from voting rolls, tampering with unofficial vote tallies or visibly modifying election web sites could introduce uncertainty and chaos without affecting the final vote. The combination of offensive cyber techniques with a disinformation campaign would enable a hostile nation or group to create an aura of confusion and illegitimacy around an election that could lead to half of the American populace forever considering that election to be stolen.

While it is much too late to effectively rehabilitate election security for the 2018 midterms, there are four straightforward steps the United States can take to prepare for potential attacks in 2020.

First, Congress needs to set legal standards that address online disinformation. Social media platforms, including my former employer, made serious mistakes in 2016. Tech companies were still using a definition of cyber-warfare focused on traditional hacking techniques—such as spear-phishing or the spreading of malware—and were not prepared to detect and mitigate the propaganda campaigns that were subsequently found and stopped.

Since 2016, many companies have changed their products to deal with misinformation, updated policies to catch inauthentic behavior and created new types of transparency around political ads. Yet it is important to note that companies have undertaken this work voluntarily and could reverse it in the future. And there is a significant gap between the actions of the most criticized companies and those that have flown under the radar: Unlike Facebook and Google, the rest of the massive online advertising industry has kept changes to a minimum.

The Honest Ads Act, introduced by Democratic Sen. Amy Klobuchar and supported by 30 bipartisan co-sponsors, is a good start to setting a legal baseline; however, it must be amended to provide for technical standardization of advertising archives and to set guidelines for the use of massive voter databases by campaigns and political parties. Since the Obama 2012 campaign demonstrated the power of online ad targeting, parties, campaigns and super PACs have finely honed their targeting techniques and regularly run ads specifically designed to influence dozens or hundreds of voters with customized messaging. Americans need to collectively decide how finely political influence campaigns should be allowed to divvy up the electorate, even when those campaigns are domestically run and otherwise completely legal. Congress could also encourage more cooperation between the tech platforms by expanding the protections it granted to share cybersecurity threats to include misinformation actors, as well as by giving legal encouragement to companies to engage academics in joint research projects.

Second, the United States must carefully reassess who in government is responsible for cybersecurity defense. The U.S. has two hyper-competent intelligence and military security organizations in the National Security Agency and U.S. Cyber Command, but both are most broadly focused on offensive operations and face legal restrictions on domestic U.S. operations. The Department of Homeland Security has consolidated a great deal of the defensive responsibilities across multiple sectors, but its cyber capabilities focus on critical infrastructure such as the power grid.  This leaves the FBI as the de facto agency coordinating cyber defense in the United States. While the bureau has many skilled agents and technologists, it is at its core a law enforcement entity that focuses on investigating crimes after they occur, diligently building a case and, eventually, bringing the perpetrators to justice. Prevention certainly has become a bigger focus for the FBI—especially in the terrorism context since 9/11—but the special counsel’s recent indictments for two-year-old Russia actions demonstrate that the general timeline of FBI action does not comport well with preventing attacks in the first place.

The United States should consider following its closest allies in creating an independent, defense-only cybersecurity agency with no intelligence, military or law enforcement responsibility. In the run-up to the most recent French and German elections, the respective cybersecurity agencies of these countries had access to intelligence on likely adversaries, the legal authority to coordinate election protection and the technical chops to work directly with technology platforms. These organizations were independent enough to work directly with the relevant political campaigns, and their uncompromised mandates made them effective partners for multinational tech companies.

Third, each of the 50 states must build capabilities on election protection. While the Constitution gives Congress the ability to regulate elections, traditionally states have jealously guarded this area and eyed federal aid with great suspicion. For states’ autonomy to thrive, it is critical for every state to follow the lead of Colorado and a handful of others in building competent statewide election security teams that set strong standards for verifiable voting, perform security testing of local systems, and provide a rapid-reaction function in case of an attempted attack. The federal government could support the growth of these statewide functions with funding, intelligence and training, and by finding ways to harness the capabilities of private IT workers.

In the long run, it will be impossible to completely prevent any interference in elections. Any system as complicated as one supporting the franchise of more than 200 million registered voters will have serious vulnerabilities. Individual candidates and campaign workers will succumb to professional attacks. And open societies are inherently vulnerable to external influence. This is particularly true in the United States, where the government doesn’t license the official press, empower officials to declare certain topics verboten, jail journalists for reporting on leaked documents, arrest bloggers for questioning the government, or require state IDs to create online accounts. In 2016, the most effective Russian propaganda was that which was carried in the pages of the New York Times and the Washington Post and repeated 24/7 on the cable news channels. The GRU successfully leveraged stolen information to entice the media to cover the anti-Clinton stories it preferred, and there is no way to prevent or limit that kind of influence while also respecting the rights of a free press.

The fourth step necessary is one that can be driven only by the demands of the American citizenry: Americans must demand that future attacks be rapidly investigated, that the relevant facts be disclosed publicly well before an election, and that the mighty financial and cyber weapons available to the president be utilized immediately to punish those responsible. This might seem like a far stretch under President Trump, but recent efforts by members of his administration to prepare for the midterms demonstrate that public pressure could encourage a meaningful response despite the current occupant of the Oval Office.

The attacks against U.S. political discourse aim to undermine citizens’ confidence, create chaos and jeopardize the legitimacy of the American government. With the right political will and cooperation, the United States can demonstrate that 2016 was an aberration and that the U.S. political sphere will not become the venue of choice for the latest innovations in global information warfare. The world—including America's enemies—is watching. 

 


Across the world, states are establishing military cyber commands or similar units to develop offensive cyber capabilities. One of the key dilemmas faced by these states is whether (and how) to integrate their intelligence and military capabilities to develop a meaningful offensive cyber capacity. This topic, however, has received little theoretical treatment. The purpose of this paper is therefore to address the following question: What are the benefits and risks of organizational integration of offensive cyber capabilities (OIOCC)? I argue that organizational integration may lead to three benefits: enhanced interaction efficiency between intelligence and military activities, better (and more diverse) knowledge transfer, and reduced mission overlap. Yet there are also several negative effects attached to OIOCC. It may lead to 'cyber mission creep' and an intensification of the cyber security dilemma, and it could result in arsenal cost ineffectiveness in the long run. Although the benefits of OIOCC are seen to outweigh the risks, failing to grasp the negative effects may lead to unnecessary cycles of provocation, with potentially disastrous consequences.

 
Publication Type: Journal Article
Journal: Defence Studies

Stamos joins the Hoover Institution and the Center for International Security and Cooperation at the Freeman Spogli Institute for International Studies. The former Facebook chief security officer brings a rich real-world perspective on cybersecurity and technology policy.

 

Stanford University’s Freeman Spogli Institute for International Studies and the Hoover Institution announced today the appointment of Alex Stamos as a William J. Perry Fellow at the Center for International Security and Cooperation (CISAC), Cyber Initiative fellow, and Hoover visiting scholar.

Stamos, a computer security expert and the outgoing chief security officer at Facebook, will engage in teaching, research and policy engagement through CISAC and the Hoover Institution's Cyber Policy Program as well as the Stanford Cyber Initiative. Drawing on his considerable experience in the private sector, he will teach a graduate level course about the basics of cyber offense and defense to students without technical backgrounds as part of the Ford Dorsey Master’s in International Policy program at the Freeman Spogli Institute, which houses CISAC.

"With our country facing unprecedented challenges in digital interference with the democratic process and numerous other cybersecurity issues, Alex’s experience and perspective are a welcome addition to our group of fellows,” said Freeman Spogli Institute Director Michael McFaul.

In his role, Stamos will also engage in research projects aimed at public policy initiatives as a member of the Faculty Working Group on Information Warfare. The working group will develop, discuss and test concepts and theories about information warfare, as well as conduct applied research on countermeasures to identify and combat information warfare. The working group will also develop policy outreach in briefings to government officials, public seminars and workshops, Congressional testimony, online and traditional media appearances, op-eds and other forms of educating the public on combatting information warfare.

“We are thrilled that Alex is devoting even more energy to our cyber efforts,” said CISAC Co-Director Amy Zegart. “He’s been a vital partner to the Stanford cyber policy program for several years, and his Stanford ‘hack lab,’ which he piloted in Spring 2018, is a cutting-edge class to train students in our new master’s cyber policy track. He brings extraordinary skills and a unique perspective that will enrich our classes, research, and policy programs.”

Over the past three years, the Hoover Institution and CISAC have jointly developed the Stanford Cyber Policy Program.  Its mission is to solve the most important international cyber policy challenges by conducting policy-driven research across disciplines, serving as a trusted convener across sectors, and teaching the next generation. The program is led by Dr. Amy Zegart and Dr. Herbert Lin. Stamos has participated on the advisory board of the program since its inception.

“We look forward to working with Alex on some of the key cyber issues facing our world today," said Tom Gilligan, director of the Hoover Institution. "He brings tremendous experience and perspective that will contribute to Hoover’s important research addressing our nation’s cyber security issues.”

“I am excited to join Stanford and for the opportunity to share my knowledge and expertise with a new generation of students, and for the opportunity to learn from colleagues and students across many disciplines at the university,” said Stamos.

A graduate of the University of California, Berkeley, Stamos studied electrical engineering and computer science. He later co-founded a successful security consultancy, iSEC Partners, and in 2014 he joined Yahoo as its chief information security officer. Stamos joined Facebook as chief security officer in June 2015, where he led Facebook’s internal investigation into targeted election-related influence campaigns via the social media platform.

###

About CISAC: Founded in 1983, CISAC has built on its research strengths to better understand an increasingly complex international environment. It is part of Stanford's Freeman Spogli Institute for International Studies (FSI). CISAC’s mission is to generate knowledge to build a safer world through teaching and inspiring the next generation of security specialists, conducting innovative research on security issues across the social and natural sciences, and communicating our findings and recommendations to policymakers and the broader public. 

About the Hoover Institution: The Hoover Institution, Stanford University, is a public policy research center devoted to the advanced study of economics, politics, history, and political economy—both domestic and foreign—as well as international affairs. With its eminent scholars and world-renowned Library & Archives, the Hoover Institution seeks to improve the human condition by advancing ideas that promote economic opportunity and prosperity and secure and safeguard peace for America and all mankind.

About the Stanford Cyber Initiative:  Working across disciplines, the Stanford Cyber Initiative aims to understand how technology affects security, governance, and the future of work.

Media contact: Katy Gabel, Center for International Security and Cooperation: 650-725-6488, kgabel@stanford.edu

 


Conversational software programs might give patients a lower-pressure environment for discussing mental health, but they come with risks to privacy and accuracy. Stanford scholars discuss the pros and cons of this trend.

 

Interacting with a machine may seem like a strange and impersonal way to seek mental health care, but advances in technology and artificial intelligence are making that type of engagement more and more a reality. Online sites such as 7 Cups of Tea and Crisis Text Line are providing counseling services via web and text, but this style of treatment has not been widely utilized by hospitals and mental health facilities.

Stanford scholars Adam Miner, Arnold Milstein, and Jeff Hancock examined the benefits and risks associated with this trend in a Sept. 21 article in the Journal of the American Medical Association. They discuss how technological advances now offer the capability for patients to have personal health discussions with devices like smartphones and digital assistants.

Stanford News Service interviewed Miner, Milstein and Hancock about this trend.

Read more: https://news.stanford.edu/2017/09/25/scholars-discuss-mental-health-technology/


Danil Kerimi currently leads the World Economic Forum’s work on Internet governance, evidence-based policy-making, the digital economy, and industrial policy. In addition, he manages the Global Agenda Council on Cybersecurity. Previously, Mr. Kerimi led the Forum’s engagement with governments and business leaders in Europe and Central Asia and was in charge of developing the Forum’s global public sector outreach strategy on various projects related to cyberspace, including cyber resilience, data, the digital ecosystem, ICT and competitiveness, and hyperconnectivity. Before joining the Forum, Mr. Kerimi worked with the United Nations Office on Drugs and Crime/Terrorism Prevention Branch, the Organization for Security and Cooperation in Europe, the International Organization for Migration, and other international and regional organizations.


Rick is the CSO for Palo Alto Networks where he is responsible for the company’s internal security program, the oversight of the Palo Alto Networks Threat Intelligence Team and the development of thought leadership for the cyber security community. His prior jobs include the CISO for TASC, the GM of iDefense and the SOC Director at Counterpane. He served in the U.S. Army for 23 years and spent the last 2 years of his career running the Army’s CERT. Rick holds a Master of Computer Science degree from the Naval Postgraduate School and an engineering degree from the U.S. Military Academy. He taught computer science at the Military Academy and contributed as an executive editor to two books: “Cyber Fraud: Tactics, Techniques and Procedures” and “Cyber Security Essentials.”


Celso Guiotoko serves as Corporate Vice President for Nissan. He started his professional career in information technology in 1983, when he joined the Brazilian bank BRADESCO, before joining Andersen Consulting LLP in 1985, working in the Sao Paulo, Chicago world headquarters, and Tokyo offices.

In addition to his activities in the business world, from 1986 to 1988 he was an Assistant Professor for Information Technology at the Universidade Estadual de Sao Paulo, where he also supervised the Internship Programme.

In 1996 he joined Toshiba America Electronic Components in North America as Director of Information Systems, moving to i2 Technologies in Japan as head of Consulting Services at the end of 1997.

Celso Guiotoko joined Nissan Motor Co., Ltd. in May 2004 as Vice President in charge of the Global IS Division and was promoted to Corporate Vice President of the Division in April 2006.

In June 2009, he added the role of Managing Director in charge of IS/IT functions for the Renault-Nissan Alliance. His tasks are to maximise the synergies in IS/IT functions and identify potential synergies in Alliance business systems.

Celso Guiotoko was born in January 1959 in Brazil. He attended the Escola Politecnica – Civil Engineering and the Faculdade de Economia e Administracao – Accounting Science of the Universidade de Sao Paulo in Brazil.


Rod Beckstrom is a well-known cybersecurity authority, Internet leader and expert on organizational leadership. He is the former President and CEO of ICANN, the founding Director of the U.S. National Cybersecurity Center and co-author of the best-selling book The Starfish and the Spider: The Unstoppable Power of Leaderless Organizations. He is a frequent international media commentator and public speaker.
Rod currently serves as an advisor to multinational companies and international institutions. Mr. Beckstrom is a member of the World Economic Forum’s Global Council on the Future of Government.
