The 5th Annual Henry Taub International TCE Conference




    Scaling Systems for Big Data 
    Scalable, Reliable and Secure Systems
    June 1-2, 2015
    Churchill Auditorium, Technion, Haifa

     

    Research Workshop of the Israel Science Foundation (ISF)

     

    Conference chairs:
    Idit Keidar (EE Technion) & Eran Yahav (CS Technion)


    Speakers include:

    Lorenzo Alvisi, UT Austin: Salt: Combining ACID and BASE in a Distributed Database
    Omer Barkol, HP: The Machine changes … software stacks
    Emery Berger, UMass Amherst: Programming Language Technology for the Sciences
    Ran Bittmann, SAP: Trends in Democratization of Hybrid Computing
    Ran Canetti, Tel Aviv University: Cryptographic Software Obfuscation and Applications
    Keren Censor-Hillel, Technion: Are Lock-Free Algorithms Practically Wait-Free?
    Peter Druschel, MPI-SWS, Germany: Ensuring Compliance in Large-scale Data Systems
    Shelly Garion and Yaron Weinsberg, IBM Research: Auditing, Security and Data Analytics for Cloud Object Stores
    Patrice Godefroid, Microsoft Research: Automated Software Testing for the 21st Century
    Michael Kagan, Mellanox: Enabling the Use of Data
    Alex Kesselman, Google: Building Scalable Cloud Storage
    Robert O'Callahan, Mozilla Corporation: Taming Nondeterminism
    Martin Odersky, EPFL, Switzerland: Compilers are Databases
    Rotem Oshman, Tel Aviv University: Information-Theory Lower Bounds in Distributed Computing
    Radia Perlman, EMC Corporation: Making data be there when you want it and gone when you want it gone
    Wolfgang Roesner, IBM Systems: Software Methods meet Large-Scale System-on-a-Chip Design: the Arrival of Aspect-Oriented Design
    Mooly Sagiv, Tel Aviv University: Verifying Correctness of Stateful Networks
    Benny Schnaider, Ravello: Infrastructure Independent Application Life Cycle
    Bianca Schroeder, University of Toronto: Programming paradigms for massively parallel computing: Massively inefficient?
    Mark Silberstein, Technion: Accelerator-Centric Operating Systems: Rethinking the Role of CPUs in Modern Computers
    Martin Vechev, ETH Zurich, Switzerland: Machine Learning for Programming
    Ayal Zaks, Intel: Compiling for Scalable Computing Systems – the Merit of SIMD
    Andreas Zeller, Saarland University, Germany: Guarantees from Testing
     

    Sponsored by


     In collaboration with

    Afeka
    IATI
    IEEE SYSTOR 2015

    Media Sponsors
    SagivTech
    Amdocs


    Lorenzo Alvisi, UT Austin
    Salt: Combining ACID and BASE in a Distributed Database

    What is the right abstraction to support scalable and available storage and retrieval of data in a distributed database?
    Today's options—ACID transactions and BASE implementations—force developers to compromise either ease of programming or performance. This talk will discuss Salt, a new database that allows the ACID and BASE paradigms to coexist in order to combine the desirable qualities of both. Salt is based on the observation, rooted in Pareto's principle, that, when an application outgrows the performance and availability offered by an ACID implementation, it is often because of the requirements of only a few transactions: most transactions never test the limits of what ACID can offer. Through the new abstraction of BASE transactions, Salt allows developers to safely “BASE-ify” only those few performance-critical ACID transactions, without compromising the ACID guarantees enjoyed by the remaining transactions: in so doing, Salt can reap most of the performance benefits of the BASE paradigm, without unleashing the cost and complexity that traditionally come with it.
    Brief Bio
    Lorenzo Alvisi holds an Endowed Professorship in Computer Science at the University of Texas at Austin, where he co-leads the Laboratory for Advanced Systems Research (LASR). He received a Ph.D. in Computer Science from Cornell University, which he joined after earning a Laurea degree in Physics from the University of Bologna, Italy. His research interests are in the theory and practice of distributed computing, with a particular focus on dependability. He is a Visiting Chair Professor at Shanghai Jiao Tong University, a Fellow of the ACM, an Alfred P. Sloan Foundation Fellow, and the recipient of a Humboldt Research Award and of an NSF Career Award. He serves on the Editorial Boards of ACM TOCS and Springer’s Distributed Computing and is a council member of the CRA’s Computing Community Consortium. In addition to distributed computing, he is passionate about classical music and red Italian motorcycles.  

    Homepage




    Omer Barkol, HP
    The Machine changes … software stacks

    Enterprises expect their IT to take the next step by providing real business value from their “big data”. Current systems' architectures may not be up to the task of handling petabyte scales at reasonable cost and energy utilization. HP Labs is taking a bold approach by looking at the broad picture with The Machine project. The Machine presents a new computer architecture and is based on changing the way computation, communication, and storage are considered. While presenting such a big shift in hardware, software stacks should be rethought as well. In this talk I will present The Machine and then review different software-related aspects.
    Brief Bio
    Dr. Barkol has been a Research Manager at HP Labs Israel since April 2011, and a senior researcher at HP Labs Israel since July 2008. During this period he has conducted research in various areas such as formal languages, software automation, graph mining, analytics for knowledge management, and collaboration in the enterprise. As a research manager, the current focus of his team is on big graph analytics and analytics for The Machine. Previously, Dr. Barkol served as a lecturer and teaching assistant in the Computer Science department of the Technion between 2004 and 2008. Before that, he managed a software development team developing routing protocols at the startup Charlotte's Web Networks. Dr. Barkol graduated with a B.A. in Mathematics and Computer Science, and later an M.Sc. and a Ph.D. in Computer Science, from the Technion – Israel Institute of Technology.





    Emery Berger, UMass Amherst
    The Rubinger Family Visiting Lectureship in TCE
    Programming Language Technology for the Sciences 
    Over the past 60 years, computer science has developed a vast body of knowledge on designing and implementing abstractions to enhance program efficiency and reduce error. We call this field "Programming Languages." Traditionally, this research has targeted software developers, but these approaches can have far broader applicability. In this talk, I will describe work from my group on using programming language technology to improve the efficiency of scientists and ensure the correctness of their results. 
    This talk addresses two key tools that social scientists depend on: spreadsheets and surveys. Our first system, CheckCell, reveals data and methodological errors in spreadsheets. We show that CheckCell is able to find key flaws in the claims of the economists Reinhart and Rogoff, whose spreadsheet-based analysis erroneously led to austerity budgets across Europe. Our second system, SurveyMan, automatically debugs surveys. I will explain what survey bugs are, how SurveyMan finds them, and describe case studies of using SurveyMan to deploy and debug surveys by linguists and behavioral economists.
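    To make the idea of data debugging concrete, here is a minimal sketch (not CheckCell's actual algorithm, and with made-up numbers) of flagging cells whose value has an outsized impact on a computed aggregate, which is the kind of error that slipped into the Reinhart-Rogoff spreadsheet:

        # Toy impact analysis: flag cells whose removal shifts an aggregate far more
        # than is typical for the data set (illustrative only, not CheckCell itself).
        import statistics

        def impact_outliers(cells, aggregate=lambda xs: sum(xs) / len(xs), factor=3.0):
            """Return indices of cells with unusually high impact on the aggregate."""
            base = aggregate(cells)
            impacts = [abs(aggregate(cells[:i] + cells[i + 1:]) - base)
                       for i in range(len(cells))]
            typical = statistics.median(impacts)
            return [i for i, imp in enumerate(impacts) if imp > factor * typical]

        debt_to_gdp = [62.1, 58.4, 60.3, 5900.0, 61.7, 59.8]   # made-up data, one typo
        print(impact_outliers(debt_to_gdp))                     # -> [3]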
    Brief Bio
    Emery Berger is a Professor in the College of Information and Computer Sciences at the University of Massachusetts Amherst, the flagship campus of the UMass system. Professor Berger’s research spans programming languages, runtime systems, and operating systems, with a particular focus on reliability, security, and performance. He is the creator of influential software systems including Hoard, a fast and scalable memory manager that accelerates multithreaded applications (used by companies including British Telecom, Cisco, Crédit Suisse, Reuters, SAP, and Tata, and on which the Mac OS X memory manager is based); DieHard, an error-avoiding memory manager that directly influenced the design of the Windows Fault-Tolerant Heap; and DieHarder, a secure memory manager that was an inspiration for hardening changes made to the Windows 8 heap. His honors include a Microsoft Research Fellowship, an NSF CAREER Award, a Lilly Teaching Fellowship, a Most Influential Paper Award at OOPSLA 2012, a Google Research Award, a Microsoft SEIF Award, and several Best Paper Awards; he was named an ACM Senior Member in 2010. He is currently an Associate Editor of the ACM Transactions on Programming Languages and Systems, and will serve as Program Chair for PLDI 2016.

    Homepage




    Ran Canetti, Tel Aviv University
    Cryptographic Software Obfuscation and Applications
    Software obfuscation, namely making software unintelligible while preserving its functionality, has long been considered a futile and inherently ineffective concept. Yet, recent results in cryptography have demonstrated that, under appropriate assumptions on the hardness of certain algebraic problems, software can indeed be meaningfully and effectively obfuscated.  This new prospect holds both great promise and great danger  to cyberspace, and thus also to our society at large.  The talk will present the basic ideas behind software obfuscation, recent developments towards more efficient and more secure obfuscation, and a number of salient applications.
    Brief Bio
    Ran Canetti is a professor of Computer Science at Tel Aviv University and Boston University. His research interests center on cryptography and information security, with emphasis on the design, analysis and use of cryptographic protocols. He is the head of the Check Point Institute for Information Security, the head of the scientific committee of the Blavatnik Interdisciplinary Center for Research in Cybersecurity and the head of Boston University's Reliable Information Systems and Cybersecurity Center (on leave).

    Homepage




    Keren Censor-Hillel, Technion
    Are Lock-Free Algorithms Practically Wait-Free?

    I will describe recent research that addresses the gap between the insufficient theoretical progress guarantees of many concurrent implementations and the satisfactory progress they typically provide in practice. Specifically, while obtaining efficient wait-free algorithms has been a long-time goal for the theory community, most non-blocking commercial code is only lock-free. On our quest to understand why in some cases this might be sufficient, we introduce a new methodology of analyzing algorithms under a stochastic scheduler. Based on joint work with Dan Alistarh and Nir Shavit.
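    As a rough illustration of what analysis under a stochastic scheduler means (a simulation sketch, not the paper's formal results), the following models a lock-free, CAS-based counter driven by a uniformly random scheduler and reports the worst number of steps any single increment needed:

        import random

        def simulate(num_threads=8, ops_per_thread=1000, seed=0):
            """Lock-free counter: each op is read-then-CAS, retried until the CAS succeeds.
            A uniformly random (stochastic) scheduler picks which thread steps next."""
            rng = random.Random(seed)
            counter = 0
            remaining = [ops_per_thread] * num_threads
            local = [None] * num_threads      # value each thread last read (None = must read)
            steps = [0] * num_threads         # steps spent on the current operation
            max_steps_per_op = 0
            while any(remaining):
                t = rng.choice([i for i in range(num_threads) if remaining[i]])
                steps[t] += 1
                if local[t] is None:          # step 1: read the shared counter
                    local[t] = counter
                elif local[t] == counter:     # step 2: CAS succeeds only if unchanged
                    counter += 1
                    remaining[t] -= 1
                    max_steps_per_op = max(max_steps_per_op, steps[t])
                    local[t], steps[t] = None, 0
                else:                          # CAS failed: re-read and retry
                    local[t] = counter
            return counter, max_steps_per_op

        total, worst = simulate()
        print(total, worst)   # total == 8 * 1000; the worst per-op step count stays small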
    Brief Bio
    Keren Censor-Hillel is an Assistant Professor at the Department of Computer Science at the Technion. She received her PhD in 2010 from the Technion and was afterwards a Simons Postdoctoral Fellow at MIT. Her main research interests are in theory of computation and focus on distributed computing. Censor-Hillel received a Shalon Career Advancement Award and an Alon Fellowship, as well as additional research and teaching awards.

    Homepage




    Peter Druschel, MPI-SWS, Germany
    Ensuring compliance in large-scale data systems 

    Web services and enterprise data systems typically store, process, and serve data from many users and sources. Ensuring compliance with all applicable data use policies in a complex and evolving system is a difficult challenge, because any bug, misconfiguration, or operator error can violate policy. The problem is that policy specification and enforcement code are typically intertwined with complex, dynamic, and low-level application code. In this talk, I'll describe Thoth, a distributed compliance layer, which enforces high-level policies attached to data conduits regardless of bugs, misconfigurations, and operator errors in other parts of the system. Declarative policies reflect the data confidentiality, integrity, provenance, and declassification requirements of stakeholders like data sources, service providers, and regulators. Policies may refer to user identity, action, time, system state, and data. Thoth mediates I/O, tracks data flow, and enforces policies at process boundaries. An experimental evaluation based on an early prototype indicates that Thoth can ensure data compliance with modest overhead.
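    The following toy sketch conveys the general shape of such a compliance layer: a policy travels with a data conduit and is checked at the I/O boundary, independently of the application code. It is only an illustration of the concept, not Thoth's policy language or API; all names below are invented:

        # Toy policy layer in the spirit of (but much simpler than) Thoth: a policy
        # attached to a data conduit is checked on every read, no matter which
        # application code performs the access.

        class PolicyViolation(Exception):
            pass

        class Conduit:
            def __init__(self, data, policy):
                self.data = data
                self.policy = policy          # callable: (principal, action) -> bool

            def read(self, principal):
                if not self.policy(principal, "read"):
                    raise PolicyViolation(f"{principal} may not read this conduit")
                return self.data

        # Example policy: only members of the 'analytics' group may read.
        members = {"alice": {"analytics"}, "bob": {"marketing"}}
        policy = lambda who, action: action == "read" and "analytics" in members.get(who, set())

        records = Conduit(["record-1", "record-2"], policy)
        print(records.read("alice"))      # allowed
        # records.read("bob")             # would raise PolicyViolation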
    Brief Bio
    Peter Druschel is the founding director of the Max Planck Institute for Software Systems (MPI-SWS) in Germany. Previously, he was a Professor of Computer Science and Electrical and Computer Engineering at Rice University in Houston, Texas. His research interests include distributed systems, mobile systems, and privacy. He is the recipient of an NSF CAREER Award, an Alfred P. Sloan Fellowship, and the ACM SIGOPS Mark Weiser Award. Peter is a member of Academia Europaea and the German Academy of Sciences Leopoldina.

    Homepage




    Shelly Garion, IBM Research
    Auditing, Security and Data Analytics for Cloud Object Stores  

    The storage industry is going through a big paradigm shift caused by drastic changes in how data is generated and consumed. Cloud object stores, such as OpenStack Swift and Amazon S3, provide massive, on-line storage pools that can be accessed from anywhere at any time. As cloud object storage vendors try to reduce costs by sharing the resources among several tenants, new types of threats emerge. These include higher risks of data leakage, whether accidental or due to a targeted attack. Data store operators generate an audit trail that records all data accesses. Such data can include access to objects (get/put/delete), log-in attempts (including failed attempts), resource usage and more. This data is typically kept for forensic analysis that provides methods to reconstruct past cloud computing events and for investigation of potential security breaches. In this talk we will discuss the usage of big data analytics tools, and Apache Spark in particular, for processing events. Analyzing the events can provide insights on the data and its usage including security risks, future capacity planning and predictive failure analysis. We will describe how real-life scenarios from an operational cloud are analyzed using Apache Spark Map/Reduce and machine learning algorithms. Furthermore, we will describe an audit-trail extension to OpenStack Swift that offers a complete audit trail of data accesses. The audit trail can be consumed by an activity monitoring system for generating compliance reports and for defining various policies for controlling data access.
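    As a hint of what such processing looks like, here is a small PySpark sketch that counts failed log-in attempts per account from an audit trail and flags accounts with suspiciously many failures. The log path, line format and field names are invented for illustration; they are not OpenStack Swift's actual audit schema:

        # Hypothetical audit-trail lines: "<timestamp> <account> <action> <status>"
        # e.g. "2015-06-01T10:00:00 acct42 login FAILED"
        from pyspark import SparkContext

        sc = SparkContext(appName="audit-analytics")
        events = sc.textFile("hdfs:///audit/2015-06-01.log").map(lambda line: line.split())

        failed_logins = (events
                         .filter(lambda f: len(f) == 4 and f[2] == "login" and f[3] == "FAILED")
                         .map(lambda f: (f[1], 1))
                         .reduceByKey(lambda a, b: a + b))

        # Accounts with more than 100 failed log-ins look like brute-force attempts.
        for account, count in failed_logins.filter(lambda kv: kv[1] > 100).collect():
            print(account, count)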
    Brief Bio
    Dr. Shelly Garion is a Research Staff Member in the IBM Cloud Security & Analytics group. She holds a B.Sc. in Mathematics and Physics from the "Talpiot" program, and M.Sc. and Ph.D. degrees in Mathematics, all obtained at the Hebrew University of Jerusalem. She served as a Senior Algorithms Researcher at the IDF (Communication Corps). She was a postdoctoral researcher at several distinguished research institutes and universities around Europe: the Max-Planck-Institute for Mathematics in Bonn (Germany), the Institut des Hautes Etudes Scientifique in Bures-sur-Yvette (France) and the University of Muenster (Germany). She is currently working on Big Data analytics using open-source tools such as Apache Spark, Apache Hadoop and GraphLab (Dato) in the context of OpenStack Swift object storage.




    Patrice Godefroid, Microsoft Research
    The Rubinger Family Visiting Lectureship in TCE
    Automated Software Testing for the 21st Century
    During the last decade, research on automating software testing using program analysis has experienced quite a resurgence. The first part of this talk will present an overview of recent advances on automatic code-driven test generation. This approach to software testing combines techniques from static program analysis (symbolic execution), dynamic analysis (testing and runtime instrumentation), model checking (systematic state-space exploration), and automated constraint solving (SMT solvers). Notably, this approach has been implemented in the Microsoft tool SAGE, which is credited with finding roughly one third of all the security vulnerabilities discovered by file fuzzing during the development of Microsoft's Windows 7. The second part of the talk will discuss current trends in the software industry, and their impact on software testing.
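    The core mechanism of this kind of whitebox test generation can be illustrated with a few lines of Z3: take the path constraint observed on one run, flip a branch condition, and ask the solver for an input that drives execution down the unexplored path. This is a hedged, minimal sketch of the idea, not SAGE itself:

        # Suppose the program under test branches like this:
        #   def program(x, y):
        #       if x + y == 10:          # branch A
        #           if x > 7:            # branch B  <- we want an input reaching here
        #               crash()
        # From a run with (x=1, y=9) we learn the path constraint
        # "x + y == 10 and not (x > 7)". Keep the prefix, negate the last branch, solve.
        from z3 import Int, Solver, sat

        x, y = Int("x"), Int("y")
        s = Solver()
        s.add(x + y == 10)    # kept prefix of the observed path constraint
        s.add(x > 7)          # negation of the branch we want to flip
        if s.check() == sat:
            m = s.model()
            print("new test input:", m[x].as_long(), m[y].as_long())   # e.g. x=8, y=2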
    Brief Bio
    Patrice Godefroid is a Principal Researcher at Microsoft Research. He received a B.S. degree in Electrical Engineering (Computer Science elective) and a Ph.D. degree in Computer Science from the University of Liege, Belgium, in 1989 and 1994 respectively. From 1994 to 2006, he worked at Bell Laboratories (part of Lucent Technologies), where he was promoted to "distinguished member of technical staff" in 2001. His research interests include program (mostly software) specification, analysis, testing and verification.

    Homepage




    Martin Odersky, EPFL Switzerland
    Compilers are Databases

    Twenty years ago, compilers translated source programs to machine code. They still do that, but now have to fulfil many additional functions. They are at the core of incremental build tools, they support interactive exploration via REPLs or worksheets, they back navigation and exploration in IDEs, and they provide frameworks for deep code analysis. Compilers have to do all that in the context of rapidly changing program updates. At the limit, key internal data structures need to be updated as fast as a programmer can type.
    These developments suggest a change in viewpoint. Traditionally compilers were modelled as functions from source programs to target programs. I'll argue that we should see them instead as databases that maintain complex views over program structures. Some of these views might be target programs, others might be data structures for consumption by an IDE or a program analysis tool. Since views maintained by a compiler can be expensive to compute, updating them in the face of rapid program changes is hard. In this talk I report our experiences with a compiler architecture that tackles this problem by taking inspiration from functional databases.
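    A toy way to picture the "compiler as a database" view: derived structures such as a symbol table behave like materialized views over the sources, recomputed only for the files that actually changed. The sketch below is an invented illustration, not how the Scala compiler is implemented:

        # Toy "compiler as a database": the symbol table is a materialized view over
        # the sources, recomputed only for files whose contents changed.
        import re

        class SymbolView:
            def __init__(self):
                self.sources = {}     # file name -> source text
                self.view = {}        # file name -> set of defined function names

            def update(self, name, text):
                if self.sources.get(name) == text:
                    return            # unchanged: the cached view row is still valid
                self.sources[name] = text
                self.view[name] = set(re.findall(r"def\s+(\w+)", text))

            def lookup(self, symbol):
                return [f for f, syms in self.view.items() if symbol in syms]

        db = SymbolView()
        db.update("a.py", "def foo():\n    pass\n")
        db.update("b.py", "def bar():\n    return foo()\n")
        print(db.lookup("foo"))                       # ['a.py']
        db.update("a.py", "def foo():\n    pass\n")   # no change: no recomputation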
    Brief Bio
    Martin Odersky is a professor at EPFL in Lausanne, Switzerland.
    He is best known as the creator and principal designer of the Scala programming language. Prior to that, he made several contributions to the development of Java. He created the Pizza and GJ languages, designed the original version of generics for Java, and wrote the javac reference compiler. More generally, Martin is interested in programming languages and methods, in particular how object-oriented and functional programming can be made to work seamlessly together. He believes that the two paradigms are two sides of the same coin and should be unified as much as possible. He was named an ACM Fellow for his achievements in this area.

    Homepage




    Rotem Oshman, Tel Aviv University
    Information-Theory Lower Bounds in Distributed Computing

    The advent of massively parallel computation on very large data sets ("big data") raises many interesting and fundamental questions in distributed computing; the cost of computation in systems like MapReduce and OpenMPI is largely dominated by communication and synchronization between the machines, so understanding these complexity measures can guide us in the development of better algorithms for parallel and cloud computing. In this talk I will discuss recent applications of tools and ideas from information theory to understand the time and communication complexity of various problems in synchronous parallel computation.
    Brief Bio
    Rotem completed a B.A. and an M.Sc. at the Technion, and obtained her PhD from MIT under the supervision of Prof. Nancy Lynch. She was a post-doctoral fellow at the University of Toronto and Princeton University, and is currently a senior lecturer at the Computer Science Department of Tel Aviv University.

    Homepage




    Mooly Sagiv, Tel Aviv University
    Verifying Correctness of Stateful Networks

    Modern computer networks maintain state to track the temporary status of the network. This makes it possible to enforce complicated forwarding policies and to enhance networks in a modular way. However, it can also lead to subtle errors and makes network verification hard. I will survey new techniques for dealing with stateful networks. I will also talk about techniques to infer provably correct network configurations. This is joint work with Aurojit Panda and Scott Shenker (UCB), Katherina Argyraki (EPFL), Ori Lahav (MPI-SWS), Kalev Aplernas, Alexander Rabinovich, and Yaron Welner.
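    To give a flavor of what verifying a stateful network element involves (this is a toy model, not the techniques surveyed in the talk), the sketch below models a stateful firewall as a transition system and exhaustively checks that no unsolicited inbound packet is ever delivered:

        # Toy model: a stateful firewall that forwards inbound packets only for flows
        # that were first opened from the inside. We exhaustively check short packet
        # sequences against the invariant "no unsolicited inbound packet is delivered".
        from itertools import product

        PACKETS = [("out", "h1", "srv"), ("in", "srv", "h1"), ("in", "srv", "h2")]

        def firewall_step(state, pkt):
            direction, src, dst = pkt
            if direction == "out":
                return state | {(src, dst)}, True          # open the flow, deliver
            allowed = (dst, src) in state                   # inbound: reply to an open flow?
            return state, allowed

        def check(max_len=4):
            for seq in product(PACKETS, repeat=max_len):
                state, expected_replies = frozenset(), set()
                for pkt in seq:
                    state, delivered = firewall_step(state, pkt)
                    direction, src, dst = pkt
                    if direction == "out":
                        expected_replies.add((dst, src))    # inbound replies now expected
                    elif delivered and (src, dst) not in expected_replies:
                        return False, seq                   # invariant violated
            return True, None

        print(check())    # (True, None) for this model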
    Brief Bio
    Mooly Sagiv is Professor of Computer Science at Tel Aviv University.
    His research focuses on program analysis and verification, in particular reasoning about imperative programs manipulating dynamic data structures. His current work includes shape analysis and reasoning about computer networks. Sagiv is a recipient of a 2013 senior ERC research grant for Verifying and Synthesizing Software Composition. Sagiv was a visiting professor at UC Berkeley and Stanford University in 2010-2011, and was a postdoc with Tom Reps at the University of Wisconsin in 1994-1995. He spent three years at IBM as a researcher after earning his PhD from the Technion in 1991.

    Homepage




    Bianca Schroeder, University of Toronto
    Programming paradigms for massively parallel computing: Massively inefficient?

    At the core of the "Big Data" revolution lie frameworks that allow for the massively parallel processing of large amounts of data, such as Google's MapReduce or Yahoo!'s Hadoop. The broad goal of our work is to study how well these programming frameworks work in practice. We use a set of job traces from production systems, including data centres at Google and Yahoo!, to study the typical life experience of a job running in such systems. We find that a surprisingly large number of jobs do not complete successfully and show how different job characteristics are correlated with job failure and can be used to predict unsuccessful runs of a job. Finally, we compare the results with those from traditional supercomputing systems.
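    The prediction part of such a study can be pictured as fitting a simple classifier over per-job features; the sketch below does this on a tiny synthetic "trace" with invented features, purely to illustrate the workflow (the actual study uses production traces and different job characteristics):

        # Illustrative only: predicting job failure from per-job features on a tiny
        # synthetic data set; not the study's data or methodology.
        from sklearn.linear_model import LogisticRegression

        # Invented features per job: [requested_memory_gb, num_tasks, prior_resubmissions]
        X = [[4, 10, 0], [64, 2000, 3], [8, 50, 0], [128, 5000, 5],
             [2, 20, 0], [96, 3000, 4], [16, 100, 1], [256, 8000, 6]]
        y = [0, 1, 0, 1, 0, 1, 0, 1]        # 1 = job did not complete successfully

        model = LogisticRegression().fit(X, y)
        print(model.predict([[80, 2500, 2]]))        # predicted outcome for a new job
        print(model.predict_proba([[80, 2500, 2]]))  # associated probabilities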
    Brief Bio
    Bianca is an associate professor and Canada Research Chair in the Computer Science Department at the University of Toronto. Before joining UofT, she spent 2 years as a post-doc at Carnegie Mellon University working with Garth Gibson. She received her doctorate from the Computer Science Department at Carnegie Mellon University under the direction of Mor Harchol-Balter. She is an Alfred P. Sloan Research Fellow, the recipient of the Outstanding Young Canadian Computer Science Prize of the Canadian Association for Computer Science, an Ontario Early Researcher Award, an NSERC Accelerator Award, a two-time winner of the IBM PhD fellowship and her work has won four best paper awards and one best presentation award. She has served on numerous program committees and has co-chaired the TPCs of Usenix FAST'14, ACM Sigmetrics'14 and IEEE NAS'11. Her work on hard drive reliability and her work on DRAM reliability have been featured in articles at a number of news sites, including Computerworld, Wired, Slashdot, PCWorld, StorageMojo and eWEEK.

    Homepage




    Mark Silberstein, Technion
    Accelerator-Centric Operating Systems: Rethinking the Role of CPUs in Modern Computers
    Hardware accelerators, like GPUs, storage/network I/O accelerators and media processors, have become the key to achieving performance and power goals in modern computing systems. Unlike CPUs, accelerators continue to enjoy significant growth in power efficiency and performance, while in parallel gaining improved programmability and versatility. However, building efficient systems that realize the potential of accelerator hardware advances today is incredibly difficult. In this talk I will argue that the root cause of this complexity is the growing conceptual gap between the accelerator-rich hardware and the CPU-centric software stack, and in particular, the lack of adequate operating system abstractions for programs running natively on accelerators. I will demonstrate the benefits of accelerator-centric OS design using native file system (GPUfs) and networking (GPUnet) layers for massively parallel GPUs as examples. These layers expose well-understood standard I/O abstractions like files and sockets directly to programs running on GPUs, and abstract away the complexity of the underlying heterogeneous hardware while removing the CPU from both control and data path for performance. GPUfs and GPUnet break the constrained accelerator-as-coprocessor model and streamline the development of high-performance, distributed applications like in-GPU-memory MapReduce and a new class of low-latency, high-throughput GPU-native network services.
    Brief Bio
    Mark Silberstein is an assistant professor at the Electrical Engineering Department, Technion. Mark's research is on computer systems with programmable computational accelerators, operating systems, and systems security. Mark did his PhD in Computer Science at the Technion, where his work led to the development of Superlink-online, an online distributed system for genetic linkage analysis, which today serves geneticists worldwide. Prior to joining the Technion faculty he spent two years as a postdoc at the University of Texas at Austin.

    Homepage




    Martin Vechev, ETH Switzerland
    The Rubinger Family Visiting Lectureship in TCE
    Machine Learning for Programming
    The increased availability of massive codebases (“Big Code”) creates an exciting opportunity for new kinds of programming tools based on probabilistic models. Enabled by these models, tomorrow’s tools will provide probabilistically likely solutions to programming tasks that are difficult or impossible to solve with traditional techniques. I will present a new approach for building such tools based on structured prediction with graphical models, and in particular, conditional random fields. These are powerful machine learning techniques popular in computer vision – by connecting these techniques to programs, our work enables new applications not previously possible. As an example, I will discuss JSNice (http://jsnice.org), a system that automatically de-minifies JavaScript programs by predicting statistically likely variable names and types. Since its release a few months ago, JSNice has become a popular tool in the JavaScript community and is regularly used by thousands of developers worldwide.
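    JSNice itself uses conditional random fields over program dependency graphs; as a drastically simplified stand-in for the underlying idea of predicting statistically likely names from context, here is a toy count-based model (all contexts and names below are invented):

        # A toy stand-in for the idea behind JSNice: predict a likely name for an
        # unknown identifier from the contexts it occurs in, using counts mined from
        # readable code. JSNice uses conditional random fields; this is only a sketch.
        from collections import Counter, defaultdict

        corpus = [("arg of getElementById", "id"), ("arg of getElementById", "id"),
                  ("index of for-loop", "i"), ("index of for-loop", "i"),
                  ("index of for-loop", "j"), ("arg of addEventListener", "callback")]

        counts = defaultdict(Counter)
        for context, name in corpus:
            counts[context][name] += 1

        def predict(contexts):
            """Combine evidence from all contexts an unknown identifier appears in."""
            votes = Counter()
            for c in contexts:
                votes.update(counts[c])
            return votes.most_common(1)[0][0] if votes else "unknown"

        print(predict(["index of for-loop"]))        # -> 'i'
        print(predict(["arg of getElementById"]))    # -> 'id'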
    Brief Bio
    Martin Vechev is a tenure-track assistant professor at the Department of Computer Science, ETH Zurich. Previously, he was a Research Staff Member at the IBM T.J. Watson Research Center, New York (2007-2011). He obtained his PhD from Cambridge University in 2008.
    His research interests are in program analysis, program synthesis, application of machine learning to programming languages, and concurrency. 

    Homepage




    Andreas Zeller, Saarland University, Germany
    The Rubinger Family Visiting Lectureship in TCE
    Guarantees from Testing
    Modern test generation techniques make it possible to generate as many executions as needed; combined with dynamic analysis, they allow for understanding program behavior in situations where static analysis is challenged or impossible. However, all these dynamic techniques still suffer from the incompleteness of testing: if some behavior has not been observed so far, there is no guarantee that it may not occur in the future. In this talk, I introduce a method called Test Complement Exclusion that combines test generation and sandboxing to provide such a guarantee. Test Complement Exclusion will have significant impact in the security domain, as it effectively detects and protects against unexpected changes of program behavior; however, guarantees would also strengthen findings in dynamic software comprehension. First experiments on real-world Android programs demonstrate the feasibility of the approach.
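    A minimal sketch of the idea behind Test Complement Exclusion (not the actual Android-level implementation): record which sensitive operations the generated tests exercised, then sandbox the program so that anything outside that observed set is blocked in the field:

        # Whatever sensitive behavior was NOT seen during testing is excluded (blocked)
        # in production. Here "behavior" is just the set of called sensitive APIs; the
        # real work operates on Android apps and system APIs.

        class Sandbox:
            def __init__(self):
                self.allowed = set()
                self.recording = True

            def call(self, api, fn, *args):
                if self.recording:
                    self.allowed.add(api)          # test phase: record observed behavior
                elif api not in self.allowed:      # production: exclude the complement
                    raise PermissionError(f"blocked unexpected call to {api}")
                return fn(*args)

        sandbox = Sandbox()

        # 1) Generated tests exercise the app; observed APIs are recorded.
        sandbox.call("read_contacts", lambda: ["alice", "bob"])
        sandbox.recording = False

        # 2) In the field, previously unseen behavior is blocked.
        sandbox.call("read_contacts", lambda: ["alice", "bob"])      # fine
        try:
            sandbox.call("send_sms", lambda number: "sent", "+123")  # never seen in tests
        except PermissionError as e:
            print(e)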
    Brief Bio
    Andreas Zeller is a full professor for Software Engineering at Saarland University in Saarbrücken, Germany, since 2001. His research concerns the analysis of large software systems and their development process. In 2010, Zeller was inducted as Fellow of the ACM for his contributions to automated debugging and mining software archives, for which he also was awarded 10-year impact awards from ACM SIGSOFT and ICSE. In 2011, he received an ERC Advanced Grant, Europe's highest and most prestigious individual research grant, for work on specification mining and test case generation. In 2013, Zeller co-founded Testfabrik AG, a start-up on automatic testing of Web applications, where he chairs the supervisory board.

    Homepage




    Ran Bittmann, SAP
    The cloud as a vehicle for democratization of hybrid computing

    The talk discusses our experience in using hybrid computing with SAP's HANA Cloud platform. This approach democratizes the use of hard-to-implement hardware acceleration such as GPUs and FPGAs by implementing it once in the *aaS platform to the benefit of its users. The talk will discuss the benefit of such an approach and present the results of our experiments with SAP's HANA Cloud platform.
    Brief Bio
    Ran M. Bittmann is a Researcher at the SAP Innovation Hub Israel, focusing on machine learning, predictive analytics and hardware acceleration of analytics algorithms. Ran has over 35 years of experience in the software industry. Prior to SAP, Ran held executive positions in several successful start-up companies in the areas of business intelligence, mobile applications and data communication. Ran holds a PhD in Information Systems from the Graduate School of Business Administration at Bar-Ilan University.




    Michael Kagan, Mellanox 
    Enabling the use of data 

    The exponential growth in data and the ever-growing demand for higher performance to serve the requirements of the leading scientific, cloud and web 2.0 applications drive the need for hyper-scale systems and the ability to connect tens of thousands of heterogeneous compute nodes in a very fast and efficient way. The interconnect has become the enabler of data and the enabler of efficient simulations. Beyond throughput and latency, the data center interconnect needs to be able to offload the processing units from the communications work in order to deliver the desired efficiency and scalability. 100Gb/s solutions have already been demonstrated, and new hyper-scale topologies are being discussed. The session will review the need for speed, new usage models and how the interconnect can play a major role in securing the large amount of data.
    Brief Bio
    Michael Kagan is a co-founder of Mellanox and has served as CTO since January 2009. Previously, Mr. Kagan served as vice president of architecture from May 1999 to December 2008. From August 1983 to April 1999, Mr. Kagan held a number of architecture and design positions at Intel Corporation. While at Intel Corporation, between March 1993 and June 1996, Mr. Kagan managed Pentium MMX design, and from July 1996 to April 1999, he managed the architecture team of the Basic PC product group. Mr. Kagan holds a Bachelor of Science in Electrical Engineering from the Technion — Israel Institute of Technology.
     




    Alex Kesselman, Google
    Building Scalable Cloud Storage

    The enterprise computing landscape has recently undergone a fundamental shift in storage architectures, as the central-service architecture has given way to distributed storage clusters that experience exponential growth. As businesses seek ways to more effectively increase storage efficiency, such clusters built of commodity PCs can deliver high performance, availability and scalability for new data-intensive applications at a fraction of the cost of monolithic disk arrays. To unlock the full potential of storage clusters, the data is replicated across multiple geographical locations, increasing availability and reducing network distance from clients. In this talk we review the scalability challenges and solutions for one of the world’s largest storage platforms, Google Cloud Storage.
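    As a small illustration of why geo-replication improves availability, here is a toy majority-quorum object replicated across three regions; it is an invented sketch, not a description of Google Cloud Storage's actual design:

        # Toy majority-quorum replication across regions: writes and reads each contact
        # a majority, so any read quorum intersects any write quorum and sees the
        # latest version, even when one region is unreachable.

        class Replica:
            def __init__(self):
                self.version, self.value = 0, None

        class ReplicatedObject:
            def __init__(self, regions=("us", "eu", "asia")):
                self.replicas = {r: Replica() for r in regions}
                self.quorum = len(self.replicas) // 2 + 1

            def write(self, value, version, available):
                live = [self.replicas[r] for r in available]
                if len(live) < self.quorum:
                    raise RuntimeError("not enough replicas reachable")
                for rep in live[:self.quorum]:
                    rep.version, rep.value = version, value

            def read(self, available):
                live = [self.replicas[r] for r in available]
                if len(live) < self.quorum:
                    raise RuntimeError("not enough replicas reachable")
                return max(live[:self.quorum], key=lambda rep: rep.version).value

        obj = ReplicatedObject()
        obj.write(b"photo-v1", version=1, available=["us", "eu"])   # asia unreachable
        print(obj.read(available=["eu", "asia"]))                   # still b"photo-v1"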
    Brief Bio
    Alex Kesselman is a Senior Staff Software Engineer at Google, where his work focuses on building a planet-scale cloud storage infrastructure for unstructured data. Prior to joining Google, he was a Wireless Networking Architect at Intel working on the IEEE 802.11n and 802.15.3c standards. He received a PhD degree in Computer Science from Tel Aviv University in 2003, and has over fifty research publications on online algorithms, network optimization, and distributed algorithms as well as game theory. Alex has been awarded thirty-four US patents.


    Robert O'Callahan, Mozilla Corporation
    The Rubinger Family Visiting Lectureship in TCE
    Taming Nondeterminism
    One challenge of developing and deploying software at scale is diagnosing rare nondeterministic failures that occur during testing and in the field. Much research has been devoted to "record and replay" solutions to this problem, but so far these are little-used in practice. To understand and overcome barriers to deploying these tools, Mozilla has built "rr", a record-and-replay debugger designed for — and actually used by — Firefox developers. I'll give an overview of "rr" and lessons learned from it so far. Then we'll take a step back and explore the world of possibilities opened to us by record-and-replay — including reverse-execution debugging, "omniscient debugging", and parallel dynamic analysis. Finally we'll consider what it would take to record all software execution all the time, and the huge impact that would have on reliability and security.
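    The essence of record-and-replay can be sketched in a few lines: during recording, every nondeterministic input is logged; during replay, the logged values are fed back so the execution, and hence the rare failure, is reproduced exactly. rr does this at the system-call and hardware-counter level for unmodified programs; the toy below only records two explicit sources of nondeterminism:

        # Toy record-and-replay of nondeterministic inputs (illustration only; rr
        # records unmodified programs such as Firefox at the system-call level).
        import random, time

        class Recorder:
            def __init__(self, mode, log=None):
                self.mode, self.log, self.pos = mode, log if log is not None else [], 0

            def nondet(self, produce):
                if self.mode == "record":
                    value = produce()
                    self.log.append(value)
                else:                          # replay: return the recorded value
                    value = self.log[self.pos]
                    self.pos += 1
                return value

        def buggy_program(rec):
            x = rec.nondet(lambda: random.randint(0, 9))
            t = rec.nondet(lambda: time.time())
            return "CRASH" if x == 7 else f"ok ({x}, {t:.0f})"

        rec = Recorder("record")
        outcome = buggy_program(rec)
        replay = Recorder("replay", log=rec.log)
        assert buggy_program(replay) == outcome     # identical execution every time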
    Brief Bio
    Robert O'Callahan is a Distinguished Engineer at Mozilla Corporation, focusing on the development of Web standards and their implementation in Firefox, with a particular focus on CSS, graphics and media APIs. He has a side interest in software development research, and debugging in particular.




    Radia Perlman, EMC Corporation
    Making data be there when you want it and gone when you want it gone 

    This talk describes the design of a system that allows a cloud to make arbitrarily many copies of data, including off-line copies on, say, tapes, and yet, once the data is supposed to be gone, it becomes unrecoverable. Obviously the answer involves encrypting the data and then throwing away the key, but this doesn't completely solve the problem, since there have to be copies of the keys. This design teaches lessons about scalability, reliability, and security, in that it demonstrates how an arbitrarily robust system can be built out of many flaky components, and how to design things so that no single organization need be completely trustworthy.
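    The basic primitive is easy to show with the cryptography package's Fernet API: encrypt before making copies, and "delete" by destroying the key. The hard part the talk addresses, managing the keys themselves robustly across multiple parties, is deliberately not shown in this sketch:

        # Encrypt before storing copies anywhere; deletion = destroying the key.
        # (Robust, distributed management of the keys themselves is not shown here.)
        from cryptography.fernet import Fernet

        key = Fernet.generate_key()            # kept by a key manager, NOT with the data
        ciphertext = Fernet(key).encrypt(b"tax records 2015")

        # The ciphertext can now be copied freely to disks, tapes, other regions...
        backups = [ciphertext, ciphertext, ciphertext]

        print(Fernet(key).decrypt(backups[0]))   # readable while the key exists

        key = None                               # "delete" the data by destroying the key
        # Every remaining copy in `backups` is now undecryptable.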
    Brief Bio
    Radia Perlman’s work has had a major impact on how computer networks work today. Her inventions in network routing make today’s IP networks more robust, scalable, and self-configuring. Also, her spanning tree algorithm transformed Ethernet from something that could support just a few hundred nodes within a building to something that can support hundreds of thousands of nodes. More recently, she invented TRILL, a technology that removes the data path restrictions in Ethernet so that data can travel over shortest paths, multiple paths, and use traffic engineering. She has also made major contributions to network security: making networks robust even if some of the components are malicious, DDOS (distributed denial of service) defense, authentication, and authorization. She is the author of “Interconnections: Bridges, Routers, Switches, and Internetworking Protocols”, and coauthor of “Network Security: Private Communication in a Public World”, both of which are popular textbooks. She holds over 100 issued patents. She has received numerous industry awards including lifetime achievement awards from ACM’s SIGCOMM and Usenix, election to the National Academy of Engineering, induction into the Internet Hall of Fame, and an honorary doctorate from KTH (Royal Institute of Technology, Sweden). She has a PhD in computer science from MIT.




    Wolfgang Roesner, IBM Systems 
    Software Methods meet Large-Scale System-on-a-Chip Design: the Arrival of Aspect-Oriented Design

    As silicon technology scales to sub-20nm feature sizes, Systems-on-a-Chip grow so large that new design methodologies are needed. In particular, functional and physical modularity constraints are increasingly pulled in contradictory directions. A new trend in chip design borrows heavily from the idea of aspect-oriented programming to solve productivity, quality and scalability issues that arise when functional and physical modules become incongruent. This talk will demonstrate the motivation for aspect-oriented design and will show how cross-discipline learning can drive true innovation in System-on-a-Chip design.
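    Since the methodology borrows from aspect-oriented programming, a tiny software example is a useful reminder of the idea being carried over: a cross-cutting concern is woven into existing functional modules without editing them. The Python decorator below stands in for an aspect weaver; it is only an analogy, not the chip-design flow itself:

        # Aspect-oriented programming in miniature: a cross-cutting concern (tracing)
        # is woven into functional modules without modifying their code.
        import functools

        def tracing_aspect(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                print(f"enter {fn.__name__}{args}")
                result = fn(*args, **kwargs)
                print(f"exit  {fn.__name__} -> {result}")
                return result
            return wrapper

        # Functional "modules", written with no knowledge of the tracing concern:
        def decode(instruction):
            return instruction.split()

        def execute(opcode, operand):
            return f"{opcode}({operand}) done"

        # The "weaver": apply the aspect across modules in one place.
        decode, execute = tracing_aspect(decode), tracing_aspect(execute)
        execute(*decode("ADD r1"))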
    Brief Bio
    After getting a Ph.D. in EE at the University of Kaiserslautern, Germany, Wolfgang joined IBM in Boeblingen where he developed the first CMOS synthesizable RTL language and simulation tools. He joined the POWER design team at IBM Austin, TX, USA in 1994. He has been the technical lead of the IBM Server verification tools & technology team since 1996. From 2003 he was verification lead of several microprocessor projects, notably POWER6 and z10, and is now IBM Systems verification methodology lead across all microprocessors and systems. He was named an IBM Fellow in 2011.




    Benny Schnaider, Ravello
    Infrastructure independent application life cycle

    Public clouds are being positioned as the ultimate solution for running IT infrastructure and as a replacement for existing DC (data center) solutions. Unfortunately, the public cloud architecture is very different from most existing DC architectures. Given that, to date, most existing enterprise applications are tightly coupled with their underlying DC architecture, enterprises are having difficulties adapting their applications to the new public cloud architecture.
    One possible approach to dealing with this challenge is to break the application/infrastructure dependencies and focus on an application life cycle (e.g., development, testing, staging, production) that is independent of the infrastructure it runs on.
    Brief Bio
    Benny Schnaider is a high-tech serial entrepreneur. Benny co-founded Ravello Systems in 2011 and serves as its President and Chairman of the board.
    Previously, Benny was the CEO and co-founder of Qumranet whose team developed KVM (the leading open source virtualization solution) and SolidICE, a desktop virtualization solution. Following Qumranet's acquisition in 2008, Benny served as a VP of Business Development for Red Hat.
    Additionally, Benny co-founded and served on the Board of Directors of P-Cube, a developer of IP service control platforms, which was acquired by Cisco in 2004. Benny was also the CEO and Founder of PentaCom Ltd., a provider of networking products implementing the Spatial Reuse Protocol (SRP) for IP-based metropolitan networks, which was acquired by Cisco in 2000. Benny invests in and serves as a board member of several startups; some examples are Traffix Systems (acquired by F5 in 2012), B-Hive (acquired by VMware in 2008), Cloudius, Seculert, OptiCul Diagnostics and Colabo. Benny has held senior management, engineering and strategic roles at many Silicon Valley-based companies including Cisco Systems, Amdahl/Fujitsu, Hitachi, IDT, Sun Microsystems and 3Com. Benny holds a Master's degree in Engineering Management from Santa Clara University, and a B.Sc. in Computer Engineering from the Technion (Israel Institute of Technology).




    Ayal Zaks, Intel
    Compiling for Scalable Computing Systems – the Merit of SIMD

    One of the prevalent trends in modern scalable computing systems is the continuous growth in parallelism provided by increasing both coarse-grain parallelism across more independent cores and threads, and by increasing fine-grain parallelism within each thread, including widening its SIMD capabilities. This trend poses a critical challenge for programming – how can modern software make efficient use of these new capabilities? This talk focuses on the latter challenge related to SIMD, which is strongly connected to optimizing compilers. We will discuss recent research and development advancements in coping with this challenge, in terms of innovative compilation technology, programming models, and the interaction between the two.
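    A quick way to see the merit of SIMD from the programmer's side is to compare a scalar loop with its data-parallel equivalent; NumPy's vectorized kernels (which use SIMD instructions internally) stand in here for what a vectorizing compiler would generate. This is an illustrative micro-benchmark, not Intel's compiler work:

        # Scalar loop vs. data-parallel (SIMD-friendly) formulation of a dot product.
        import time
        import numpy as np

        a = np.random.rand(2_000_000)
        b = np.random.rand(2_000_000)

        t0 = time.perf_counter()
        acc = 0.0
        for i in range(len(a)):            # one multiply-add per interpreted iteration
            acc += a[i] * b[i]
        t1 = time.perf_counter()

        dot = np.dot(a, b)                 # whole-array kernel, vectorized with SIMD
        t2 = time.perf_counter()

        print(f"scalar: {t1 - t0:.2f}s  vectorized: {t2 - t1:.4f}s")
        print("same result:", np.isclose(acc, dot))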
    Brief Bio
    Ayal Zaks joined Intel's Software & Services Group in Haifa late in 2011 where he manages a compiler development team. Prior to that Ayal spent 15 years at the IBM Haifa Research Laboratory where he worked on compiler optimizations and managed its compiler technologies group. In parallel, Ayal has been active academically, working with research students, serving on program committees, publishing nearly fifty papers and co-organizing international workshops. He received B.Sc., M.Sc., and Ph.D. degrees in Mathematics and Operations Research from Tel Aviv University, and is an adjunct lecturer on Compilation at the Technion. In recent years, he is a parental fan of FIRST robotics competitions.
    Homepage
