Re-framing problems changes how we see and solve them. Many principles of scientific thought parallel the problems we face as engineers of information (e.g. uncertainty, time, distribution). This talk is an interdisciplinary look at complex adaptive systems and how they innately solve problems like resource distribution, growth, and rebalancing. From the context of intelligence and systems, this talk will look at ideas around entropy and time, ensemble forecasting, self-organization theory, the butterfly effect, virus-human co-evolution and adaptation, natural feedback loops, and self-balancing.
Purely functional Scala code needs something like Haskell's IO monad—a construct that allows functional programs to interact with external, effectful systems in a referentially transparent way. To date, most effect systems for Scala have fallen into one of two categories: pure, but slow or inexpressive; or fast and expressive, but impure and unprincipled. In this talk, John A. De Goes, the architect of Scalaz 8’s new effect system, introduces a novel solution that’s up to 100x faster than Future and Cats Effect, in a principled, modular design that ships with all the powerful primitives necessary for building complex, real-world, high-performance, concurrent functional programs.
Thanks to built-in concurrency, high performance, lawful semantics, and rich expressivity, Scalaz 8's effect system may just be the effect system that attracts mainstream Scala developers who aren't familiar with functional programming.
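To make the idea of referentially transparent effects concrete, here is a minimal sketch of an IO-style type that merely *describes* an effect and defers its execution; the names are illustrative and this is not Scalaz 8's actual implementation.

```scala
// A minimal IO-style type: building a program performs no effects;
// only unsafeRun at the "end of the world" does.
final case class IO[A](unsafeRun: () => A) {
  def map[B](f: A => B): IO[B] = IO(() => f(unsafeRun()))
  def flatMap[B](f: A => IO[B]): IO[B] = IO(() => f(unsafeRun()).unsafeRun())
}

object IO {
  def point[A](a: => A): IO[A] = IO(() => a)
}

// Composing the program is pure; nothing has happened yet.
val program: IO[Int] =
  for {
    a <- IO.point(20)
    b <- IO.point(22)
  } yield a + b

// Effects happen only when the program is finally interpreted.
println(program.unsafeRun()) // 42
```

Real effect systems layer asynchrony, concurrency, and error handling on top of this core idea; the sketch shows only the deferred-execution principle.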
The speaker will provide a primer on Deep Learning. The following topics will be covered:
1) What is deep learning?
2) What is deep learning capable of, and what are its limits in terms of technological advancement?
3) How is deep learning related to machine learning and artificial intelligence?
4) How did deep learning originate and progress to its present state?
5) How does deep learning work?
6) How can deep learning lead to the automation of intellectual and non-intellectual tasks and processes?
7) What are some barriers to entry and how can these barriers to entry be overcome?
Handling real-time data has become a critical capability for data-driven organizations. However, today’s reality is a disconnected patchwork of incomplete technologies that makes delivering real-time solutions a struggle, due to frustrating complexity, inefficiency, and incompleteness. In this talk, we address these challenges with a unified solution for real-time data. An end-to-end real-time system needs:
• Messaging: receive and distribute streaming data, with support for publish-subscribe and queuing scenarios and built-in durability, scalability, and performance, using the Apache Pulsar (incubating) messaging solution.
• Processing: process data transformations and analytics with the Heron real-time processing engine, built for performance and scalability.
• Storage: leverage Apache BookKeeper streaming log storage to ensure durability, resiliency, and performance for streaming data.
In our talk, we will provide an overview of these three underlying systems and show how they are used as the core of a unified end-to-end real-time solution.
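As a flavor of the messaging layer, the sketch below uses Pulsar's builder-style Java client from Scala to publish and consume one message. It assumes a broker running at `localhost:6650`; the topic and subscription names are placeholders.

```scala
import org.apache.pulsar.client.api.PulsarClient

// Assumes a Pulsar broker at localhost:6650 (placeholder URL).
val client = PulsarClient.builder()
  .serviceUrl("pulsar://localhost:6650")
  .build()

// Publish: producers write to a named topic.
val producer = client.newProducer()
  .topic("sensor-readings")
  .create()
producer.send("temp=21.5".getBytes("UTF-8"))

// Subscribe: consumers on a named subscription; shared subscriptions
// give queuing semantics, exclusive ones give pub-sub fan-out.
val consumer = client.newConsumer()
  .topic("sensor-readings")
  .subscriptionName("dashboard")
  .subscribe()

val msg = consumer.receive()
consumer.acknowledge(msg)
println(new String(msg.getData, "UTF-8"))

consumer.close(); producer.close(); client.close()
```

Durability here comes from BookKeeper underneath Pulsar, which is what lets the same log serve both messaging and stream storage.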
Functional programming finds its roots in mathematics - the pursuit of purity and completeness. We functional programmers look to formalize system behaviors in an algebraic and total manner. Despite this, when it comes time to deploy one's beautiful monadic ivory towers to production, most organizations cast caution to the wind and use a myriad of bash scripts and sticky tape to get the job done. In this talk, the speaker will introduce you to Nelson, an open-source project from Verizon that looks to provide rigor to your large distributed system, whilst offering best-in-class security, runtime traffic shifting and a fully immutable approach to the application lifecycle. Nelson itself is entirely composed of free algebras and coproducts, and the speaker will show not only how this has enabled development, but also how it provided a frame with which to reason about solutions to fundamental operational problems.
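For readers unfamiliar with the free-algebra style, here is a minimal sketch of the idea: operations are plain data, and interpretation is a separate concern. The `DeployOp` operations are hypothetical stand-ins for illustration, not Nelson's actual API.

```scala
// An algebra of operations, described as data (a GADT).
sealed trait DeployOp[A]
final case class Launch(unit: String) extends DeployOp[Unit]
final case class Status(unit: String) extends DeployOp[String]

// A tiny Free monad: programs over the algebra are values.
sealed trait Free[F[_], A] {
  def flatMap[B](f: A => Free[F, B]): Free[F, B] = Bind(this, f)
  def map[B](f: A => B): Free[F, B] = flatMap(a => Pure(f(a)))
}
final case class Pure[F[_], A](a: A) extends Free[F, A]
final case class Suspend[F[_], A](fa: F[A]) extends Free[F, A]
final case class Bind[F[_], E, A](fe: Free[F, E], f: E => Free[F, A]) extends Free[F, A]

def launch(u: String): Free[DeployOp, Unit] = Suspend(Launch(u))
def status(u: String): Free[DeployOp, String] = Suspend(Status(u))

// One interpreter among many: run the algebra against an in-memory map.
val state = scala.collection.mutable.Map.empty[String, String]

def step[A](op: DeployOp[A]): A = op match {
  case Launch(u) => state(u) = "running"
  case Status(u) => state.getOrElse(u, "unknown")
}

def run[A](p: Free[DeployOp, A]): A = p match {
  case Pure(a)     => a
  case Suspend(op) => step(op)
  case Bind(fe, f) => run(f(run(fe)))
}

// The program is a pure value; interpretation happens separately,
// so a test interpreter can replace the real one wholesale.
val prog: Free[DeployOp, String] =
  for {
    _ <- launch("inventory")
    s <- status("inventory")
  } yield s

println(run(prog)) // "running"
```

Swapping interpreters without touching programs is precisely what makes this style useful for reasoning about operational concerns.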
What if you had to build more machine learnt models than there are data scientists in the world? At enterprise companies like Salesforce, customer data comes in vastly different shapes and forms, making it impossible to build one catch-all model even when focusing on a single problem. Instead, it becomes necessary to build thousands of personalized, per-customer models for any single data-driven application. At Salesforce, we have built solutions to these problems into a project called Optimus Prime which we are using to develop robust, production-quality machine learning applications much more quickly than using Spark alone.
In this talk, we will demonstrate two applications of this platform. The first is AutoML which enables building simple yet powerful models for any use case even without having any background in data science. We will describe the underlying challenges of automating machine learning ranging from the user interface to data extraction and model building, touching more deeply on how we automate feature selection and model selection. The result is a system where users only need domain expertise to build production-ready machine learning applications.
The second demonstration will be of a data product more finely tuned to a specific application. We will demonstrate a product currently in development, Case Classification - automatic classification of service cases. This application not only trains and predicts on each customer’s individual data, but also scales the ML pipeline dynamically to accommodate any number of prediction fields; it supports multi-tenant, multi-label, multi-model, multi-class predictions. We’ll contrast our implementation using Optimus Prime against one in pure Spark and then show the resulting pipeline performance on real customer data.
DeepLearning4J (Deep Learning for Java - DL4J, inception 2013) was specifically designed with enterprise and production use in mind, as a first-class citizen of the JVM. Skymind develops and maintains the complete DL4J stack and its Scala abstraction (ScalNet), with a focal point on scalability and vendor integrations.
This session will focus on the challenges of migrating a research prototype to a more production-ready system within the JVM. Specifically, migrating/importing an alternative deep learning framework based on Python bindings (e.g. Keras via TensorFlow) to DL4J/ScalNet within a distributed environment using Apache Spark.
A walkthrough of a temporal IoT use case modeling an LSTM network will demonstrate the different phases of such a project. Furthermore, the different workflow capabilities for crossing language boundaries will be covered.
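As one possible shape of such a cross-language migration, DL4J's model-import module can load a model trained in Keras. The file paths below are placeholders, and the exact method names should be checked against the DL4J version in use.

```scala
import org.deeplearning4j.nn.modelimport.keras.KerasModelImport

// Placeholders: architecture JSON and weights HDF5 exported from Keras
// (model.to_json() and model.save_weights(...) on the Python side).
val network = KerasModelImport.importKerasSequentialModelAndWeights(
  "lstm_model.json",
  "lstm_weights.h5"
)

// The imported network is a regular DL4J model, usable on the JVM
// (and distributable via DL4J's Spark integration) from here on.
println(network.summary())
```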
This work arises from real-world experience working with Slick and legacy database schemas. The feeling of safety that comes from having a database schema described in Scala is sometimes dwarfed by the pain of actually writing and maintaining the Scala mappings. This becomes obvious when building a new project on top of an existing database.
The code generation component is bundled with Slick. However, the documentation and default implementation cover only the most trivial use case; anything beyond that requires a more complicated build. With every new project we have updated and expanded the use cases the plugin covers. Now we would like to share what we have built and learned.
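For context, this is roughly what the trivial use case looks like: a direct invocation of Slick's bundled generator. The connection details and package name are placeholders for your own database.

```scala
// A minimal invocation of Slick's bundled code generator, which reads the
// database schema over JDBC and emits Scala table mappings.
object GenerateTables {
  def main(args: Array[String]): Unit =
    slick.codegen.SourceCodeGenerator.main(Array(
      "slick.jdbc.PostgresProfile",      // Slick profile
      "org.postgresql.Driver",           // JDBC driver class
      "jdbc:postgresql://localhost/app", // database URL (placeholder)
      "src/main/scala",                  // output directory
      "com.example.db"                   // package for the generated code
    ))
}
```

Anything beyond this (custom naming, type overrides, filtering tables) means subclassing `SourceCodeGenerator` and wiring the generator into the build, which is where the complexity discussed in the talk begins.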