From CS and engineering to AI

Cloud (and hardware) native AI


13 x 13

What is artificial intelligence?

Fast tensor arithmetic

Done fast
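As a concrete illustration (not from the slides themselves), "fast tensor arithmetic" means replacing nested Python loops with one vectorized operation; here sketched with NumPy as a stand-in for any accelerator library:

```python
import numpy as np

# Batched matrix multiply: 32 samples, each a (64, 128) x (128, 10) product.
batch = np.random.rand(32, 64, 128)
weights = np.random.rand(128, 10)

# One vectorized call replaces three nested Python loops.
logits = batch @ weights

print(logits.shape)  # (32, 64, 10)
```

The same expression runs unchanged on a GPU or TPU when `np` is swapped for an accelerator-backed array library.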

A bit of TensorFlow: concat

if len(values) == 1:  # Degenerate case of one tensor.
  with ops.name_scope(name) as scope:
    ops.convert_to_tensor(
        axis, name="concat_dim",
        dtype=dtypes.int32).get_shape().assert_has_rank(0)
    return identity(values[0], name=scope)
return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
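The semantics that TensorFlow snippet implements are easy to check with NumPy's equivalent, `np.concatenate`: concatenating a single tensor along an axis is just the identity, which is the degenerate case the code above short-circuits.

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6]])

# Concatenation along axis 0 stacks rows.
print(np.concatenate([a, b], axis=0).tolist())  # [[1, 2], [3, 4], [5, 6]]

# With a single input, the result equals the input: the degenerate
# case TensorFlow handles with identity() instead of a real concat.
assert (np.concatenate([a], axis=0) == a).all()
```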

The law stops here

Moore's law and beyond

Improvements in

software architecture

hardware architecture

Through the use of free software

And hardware

... will help us build the AI of the future

From computer engineering to AI

Let's start at the edge

NVIDIA Edge Stack is an optimized software stack that includes NVIDIA drivers, a CUDA® Kubernetes plug-in, a CUDA Docker container runtime, CUDA-X libraries, and containerized AI frameworks and applications.

Edge computing

Computing below the clouds

Edge ⇒ distributed, close and user-owned

High-performance processing for the Internet of Things, or anything else

Grove AI HAT

5G merges Internet of Things and edge computing

And fosters AI chips

AIS: AI-in-sensor

And is fostered by free software

... And hardware

Accelerating TensorFlow and Keras

By using open hardware cores

Processing tensors via TPU

Tensor Processing Unit 3.0
Image by Zinskauf, own work, CC BY-SA 4.0

Systolic array implementation of the extended QR-RLS algorithm
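As an aside (not in the original figure), the data flow of a systolic matrix multiply is simple enough to simulate in a few lines of Python. Real arrays skew operand arrival so all cells work in parallel every clock tick; this sequential toy only mirrors the accumulation pattern:

```python
import numpy as np

def systolic_matmul(A, B):
    """Multiply A (n x k) by B (k x m) the way a systolic array does:
    at tick t, cell (i, j) receives A[i, t] from its left neighbor and
    B[t, j] from above, multiplies them, and adds to its accumulator."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for t in range(k):          # one clock tick per inner-product step
        for i in range(n):
            for j in range(m):
                C[i, j] += A[i, t] * B[t, j]
    return C

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)
assert (systolic_matmul(A, B) == A @ B).all()
```

The point of the hardware layout is that after the pipeline fills, every cell does one multiply-accumulate per tick with no memory traffic for intermediate results.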

RISC-V for the win

Yunsup Lee holding RISC V prototype chip

Kendryte K210, an AI accelerator

Or spiking neurons

DARPA SyNAPSE 16-chip board
Image by DARPA SyNAPSE, public domain

Side effect: less energy consumption

+ Less memory footprint, more speed
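One reason specialized chips save both energy and memory is that they trade float32 for narrow integer arithmetic. A minimal sketch of symmetric 8-bit quantization (illustrative only, not any particular chip's scheme):

```python
import numpy as np

weights = np.random.randn(256).astype(np.float32)

# Symmetric linear quantization: map [-max|w|, +max|w|] onto int8.
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)    # 1 byte per weight
restored = q.astype(np.float32) * scale          # dequantize

print(weights.nbytes, q.nbytes)                  # 1024 256 -- 4x smaller
assert np.abs(weights - restored).max() < scale  # rounding error is bounded
```

Integer multiply-accumulate units are also far cheaper in silicon and joules than floating-point ones, which is where the energy saving comes from.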

GPUs process vectors... fast

As fast as they consume energy

And now VPUs

Convolutions done fast
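"Done fast" has concrete content here: a VPU's bread and butter is the sliding-window multiply-accumulate below. A naive reference implementation of valid-mode 2D convolution (a sketch for clarity, not production code):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the inner loop a VPU accelerates."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # One output pixel = one windowed multiply-accumulate.
            out[y, x] = (image[y:y + kh, x:x + kw] * kernel).sum()
    return out

image = np.arange(16.0).reshape(4, 4)
edge = np.array([[1.0, -1.0]])   # horizontal gradient kernel
print(conv2d(image, edge))
```

Every output pixel is independent, which is exactly the kind of massively parallel, fixed-dataflow workload that dedicated vision silicon exploits.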

Field programmable gate arrays

Software-defined, open hardware

More bang for the buck

FPGAs want to be free

Taken from

Castillo, Pedro Angel, et al. "Evolutionary system for prediction and optimization of hardware architecture performance." 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence). IEEE, 2008.
Optimizing through emulation

Know the tools

Understand the concepts

Build your AI from the bottom

From computer science to AI

Let's stop first at the desktop

共に ("together"): concurrency is flowing together

Communicating sequential processes

Stateless processes write to and read from channels

Example in Go: KarvPrime/NeuroNet

	for ; activeWorkers > 0; activeWorkers-- {
		id := <-core.outputChannel
		if mode == "train" {
			for index, element := range core.networks[id].GetState() {
				core.connections[index].AddWeighedDiff(
					core.state[index].GetState(), element.GetState(), activeWorkers)
			}
		}
	}

Cloud computing ⇒ Working with virtualized resources

Virtual machines, storage, data stores, message queues, logging, networks, data analysis, identity management...

The current technology for designing, building, testing and deploying applications

Mainframes → Desktop → Servers → Cloud

Artificial intelligence needs to change with that.

Everything starts with git

Containers isolate resources

Describe once, deploy everywhere

Keep Keras for tomorrow

New/old languages on the block

Go for infrastructure

Full-stack JavaScript

Key: Infrastructure as code

From the simple...

az group create -l westeurope -n CCGroupEU
az vm create -g CCGroupEU -n bobot --image UbuntuLTS

... to the slightly more complex ...

    "$schema": "",
    "contentVersion": "",
    "parameters": {
        "location": { "value": "westeurope"  },
        "accountType": { "value": " Standard_LRS"  },
        "kind": { "value": "StorageV2" },
        "accessTier": { "value": "Cool"   },
        "supportsHttpsTrafficOnly": { "value": true   }

... through the more abstract ...

Vagrant.configure("2") do |config|
  config.vm.define 'public' do |public| = "debian/stretch64" "private_network", ip: ""
  config.vm.define 'db' do |db| = "fnando/dev-xenial64" "private_network", ip: ""

... to the nuts and bolts ...

- hosts: "{{target}}"
  sudo: yes
    - name: install prerrequisites
      command: apt-get update -y && apt-get upgrade -y
      command: apt-get install aptitude python-apt -y
    - name: install packages
      apt: pkg={{ item}}
        - git 
        - curl 
        - build-essential 
        - libssl-dev
        - nodejs
        - npm
    - name: Create links
      command: ln -s /usr/bin/nodejs /usr/bin/node
      ignore_errors: yes
    - name: Create profile
copy: content="export PAPERTRAIL_PORT={{PAPERTRAIL_PORT}}; export PAPERTRAIL_HOST={{PAPERTRAIL_HOST}}" dest=/home/cloudy/.profile

... to the complex

FROM rabbitmq:latest
LABEL version="0.1" maintainer=''
RUN apt-get update && apt-get upgrade -y && apt-get install -y python3 python3-pip 
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1\
    && update-alternatives --install /usr/bin/pip pip /usr/bin/pip3 1
ADD requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt
WORKDIR /home/app
ADD ./
RUN mkdir data
ADD data/cursos.json data/cursos.json
CMD ./ && celery -A PlatziTareas worker --loglevel=info & ./

Containerizing Spark

RUN apk add --no-cache curl bash openjdk8-jre python3 py-pip nss \
      && chmod +x *.sh \
      && wget${SPARK_VERSION}/spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz \
      && tar -xvzf spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz \
      && mv spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION} spark \
      && rm spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz

AI as a service

const uriBase =
const imageUrl =
const params = "?returnFaceAttributes=age,gender,headPose,smile,facialHair," +
const uri = uriBase + params
const imageUrlEnc = "{\"url\":\"" + imageUrl + "\"}"
// ... later
req.Header.Add("Content-Type", "application/json")
req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
resp, err := client.Do(req)

Discrete releases ⇒ Continuous integration/deployments

Idempotent deployments
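Idempotent means that re-running a deployment step converges to the same state instead of failing or duplicating work. A toy sketch of the idea (the `ensure_dir` helper is hypothetical, standing in for what tools like Ansible do per resource):

```python
import os
import tempfile

def ensure_dir(path):
    """Create a directory only if missing: safe to run any number of times."""
    os.makedirs(path, exist_ok=True)   # describes the desired state, not a step
    return path

target = os.path.join(tempfile.mkdtemp(), "app", "releases")
ensure_dir(target)
ensure_dir(target)   # second run is a no-op, not an error
assert os.path.isdir(target)
```

Declaring desired state rather than imperative steps is what lets the same playbook run safely against a half-configured machine.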

Avoiding works-for-me-ism

os:
  - linux
services:
  - docker
script:
  - util/
  - util/
matrix:
  include:
    - env: BUILDENV=whateverable
    - env: BUILDENV=docker

DevOps ⇒ Development + QA + Operations

We need AIOps

For reproducible, reliable AI deployments.

Orchestration of resources

Kubernetes definition

apiVersion: v1
kind: Pod
metadata:
  name: twocontainers
spec:
  containers:
  - name: sise
    image: mhausenblas/simpleservice:0.5.0
    ports:
    - containerPort: 9876
  - name: shell
    image: centos:7
    command:
      - "/bin/bash"
      - "-c"
      - "sleep 10000"

To create event-based architectures

Via distributed configuration

To create a service mesh

That includes diverse levels of virtualization

... that includes serverless computing

module.exports.hello = async (event, context) => {
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Expecto Petronum',
      input: event,
    }),
  };
};
Traditional architectures are monolithic

Modern, cloud native architectures are distributed

AI computing architectures need to be on the cloud

And based on microservices

Processing streams

Kappa and other architectures

Kappa architecture

Image by Diddharth Mittal
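The core idea of Kappa is that everything is one replayable event log: the batch view is just the stream job run again over history. A minimal sketch of that single code path, with a Python list playing the role of the log (illustrative only):

```python
from collections import Counter

def stream_counts(events):
    """One stream job: fold an event log into a materialized view."""
    view = Counter()
    for event in events:
        view[event["user"]] += 1
        yield dict(view)   # emit the updated view after each event

log = [{"user": "ada"}, {"user": "ada"}, {"user": "grace"}]

# Live consumption and batch reprocessing share this code path:
# rebuilding the view just means replaying the log through it again.
latest = None
for latest in stream_counts(log):
    pass
print(latest)  # {'ada': 2, 'grace': 1}
```

Contrast with Lambda architecture, which maintains separate batch and speed layers and therefore two implementations of the same logic.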

Your next applications will be born in the cloud

And we need to change our AI practice methods to reflect that

AI research is computer science

Familiarity with best practices is essential

Need to know: Concurrency, cloud native, DevOps