Today at Nvidia GTC 2019, the company unveiled a stunning image creator. Using generative adversarial networks, the software lets users sketch nearly photorealistic images with just a few clicks. It will instantly turn a couple of lines into a gorgeous mountaintop sunset. This is MS Paint for the AI age.
Called GauGAN, the software is just a demonstration of what’s possible with Nvidia’s neural network platforms. It’s designed to compose an image the way a human painter would, with the goal of turning a sketch into a photorealistic image in seconds. In an early demo, it seems to work as advertised.
GauGAN has three tools: a paint bucket, pen and pencil. At the bottom of the screen is a series of objects. Select the cloud object and draw a line with the pencil, and the software will produce a wisp of photorealistic clouds. But these are not image stamps. GauGAN produces results unique to the input. Draw a circle and fill it with the paint bucket and the software will make puffy summer clouds.
Users can use the input tools to draw the shape of a tree and it will produce a tree. Draw a straight line and it will produce a bare trunk. Draw a bulb at the top and the software will fill it in with leaves producing a full tree.
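Under the hood, tools like the paint bucket amount to editing a 2D map of semantic labels, which the generator network then translates into pixels. Here is a minimal sketch of that interaction; the grid representation, label names, and `flood_fill` helper are illustrative assumptions, not Nvidia's actual API, and the generator itself is not shown.

```python
# Hypothetical sketch: GauGAN-style tools edit a 2D grid of semantic
# labels; a generator network (not shown) turns that grid into pixels.
SKY, CLOUD, TREE = 0, 1, 2

def flood_fill(grid, row, col, label):
    """Paint-bucket tool: relabel the connected region under the cursor."""
    target = grid[row][col]
    if target == label:
        return grid
    stack = [(row, col)]
    while stack:
        r, c = stack.pop()
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == target:
            grid[r][c] = label
            # Spread to the four neighboring cells
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grid

canvas = [[SKY] * 4 for _ in range(4)]   # start with an all-sky canvas
flood_fill(canvas, 0, 0, CLOUD)          # bucket-fill the region with "cloud"
```

Drawing with the pen or pencil would write labels along a stroke instead of a filled region, but the principle is the same: the user edits labels, and the network decides what a "cloud" or "tree" pixel actually looks like.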
GauGAN is also multimodal. If two users create the same sketch with the same settings, random noise built into the model ensures the software creates different results for each.
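That multimodality comes from feeding the generator a random draw alongside the sketch: the same label map with different random seeds yields different images. A toy illustration of the idea, where `generate` is a stand-in for the real network, not its actual interface:

```python
import random

def generate(label_map, seed):
    """Stand-in for a generator: output depends on BOTH the sketch and
    the random seed, so identical sketches can produce different images."""
    rng = random.Random(seed)
    # Perturb each label value with seeded noise (a real GAN would
    # instead condition on a sampled latent vector).
    return [v + rng.random() for row in label_map for v in row]

sketch = [[1, 1], [0, 2]]
a = generate(sketch, seed=1)
b = generate(sketch, seed=2)
# Same sketch, different seeds -> different "images"
```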
To produce results in real time, GauGAN has to run on a Tensor computing platform. Nvidia demonstrated the software on a Titan RTX GPU, and the operator of the demo was able to draw a line and see results instantly. However, Bryan Catanzaro, VP of Applied Deep Learning Research, stated that with some modifications, GauGAN can run on nearly any platform, including CPUs, though the results might take a few seconds to display.
In the demo, the boundaries between objects are not perfect, and the team behind the project says this will improve; there is a slight seam where two objects touch. Nvidia calls the results photorealistic, but under scrutiny they don’t quite stand up. Neural networks still struggle when asked to produce output that strays from what they were trained on, and this project hopes to narrow that gap.
Nvidia turned to 1 million images on Flickr to train the neural network. Most came from Flickr’s Creative Commons, and Catanzaro said the company only uses images with permission. The company says this program can synthesize hundreds of thousands of objects and their relation to other objects in the real world. In GauGAN, change the season and the leaves will disappear from the branches. Or if there’s a pond in front of a tree, the tree will be reflected in the water.
Nvidia is releasing the white paper today; Catanzaro noted that it has already been accepted to CVPR 2019.
Catanzaro hopes this software will be available on Nvidia’s new AI Playground, but says there is a bit of work the company needs to do in order to make that happen. He sees tools like this being used in video games to create more immersive environments, but notes Nvidia does not directly build software to do so.
It’s easy to bemoan the ease with which this software could be used to produce inauthentic images for nefarious purposes. And Catanzaro agrees this is an important topic, noting that it’s bigger than one project and company. “We care about this a lot because we want to make the world a better place,” he said, adding that this is a trust issue instead of a technology issue and that we, as a society, must deal with it as such.
Even in this limited demo, it’s clear that software built around these abilities would appeal to everyone from a video game designer to architects to casual gamers. The company does not have any plans to release it commercially, but could soon release a public trial to let anyone use the software.
NVIDIA’s new driver update adds DXR to the GTX 1060 6GB and up, allowing RTX graphics features to run on Pascal and 16-series GPUs.
GTC 2019 just started in San Jose, and with it comes news from NVIDIA about GeForce driver updates, GauGAN software for AI-trained art, and other server-focused updates. Today, we’re focused primarily on the GTX updates that add DXR features.
Today, Nvidia released their next generation of small but powerful modules for embedded AI. It’s the Nvidia Jetson Nano, and it’s smaller, cheaper, and more maker-friendly than anything they’ve put out before.
The Jetson Nano follows the Jetson TX1, the TX2, and the Jetson AGX Xavier, all very capable platforms, but out of reach in physical size, price, and implementation cost for many product designers and nearly all hobbyist embedded enthusiasts.
The Nvidia Jetson Nano Developer Kit clocks in at $99, available right now, while the production-ready module will be available in June for $129. It’s the size of a stick of laptop RAM, and it only needs five watts. Let’s take a closer look with a hands-on review of the hardware.
The Nvidia Jetson is something we’ve seen before, first in 2015 as the Jetson TX1 and again in 2017 as the Jetson TX2. Both of these modules were designed as a platform for ‘AI at the edge’, an idea that’s full of buzzwords, but does make sense from an embedded development standpoint. The idea behind this ‘edge’ is to build and train all your models on racks of GPU, then bring that model over to a small computer for the inference. This small computer doesn’t need to be connected to the Internet, and the power budget doesn’t need to be huge.
This ‘AI at the edge’ paradigm isn’t new — we had dedicated AI chips in the 1980s, even if the Internet of Things hadn’t been invented yet — and Google recently released Coral, a board loaded with an Edge TPU custom ASIC in the same form factor and power budget as a Raspberry Pi. Intel has a Neural Compute Stick designed to plug into a single board computer. Again, this is proof, rendered in silicon, that we are in the second AI renaissance. The Jetson Nano is the latest board to fit into this market, and its main selling points are its small size and its availability as a module that can go straight into your product.
Here are the Nvidia spec sheets published along with this launch:
Specs and Hands-on
The Jetson Nano comes with a quad-core ARM Cortex-A57 CPU running at 1.4 GHz, and since this is Nvidia, you’ve also got a Maxwell GPU with 128 CUDA cores. Memory is 4 GB of LPDDR4, and there is support for Ethernet and MIPI CSI cameras.
There are two versions of the Jetson Nano, and two options for storage. The Developer Kit, which includes a carrier board with DisplayPort, HDMI, four USB ports, a CSI camera connector, Ethernet, an M.2 WiFi card slot, and a bunch of GPIOs, uses an SD card and costs $99. The Jetson Nano module — not the Developer Kit — comes without a carrier board (you would have to build your own) but includes 16 GB of eMMC Flash. Yes, they could have made this naming scheme easier.
These specs put the Jetson Nano in a class a bit above the Raspberry Pi 3, which is to be expected because it costs more. The review unit I was sent runs a standard Ubuntu desktop at a peppy pace, and with Internet this is a computer that performs as you would expect.
But the Jetson isn’t designed to run a GameCube emulator, even though it probably could (and someone should try). The entire point of the Jetson Nano is inference. You train your model on a big computer, or use one of the many models available for free, and you run it on the Jetson. Here, the Jetson Nano is vastly more capable than the recently launched Google Coral dev board, or a Pi with an Intel compute stick. You also get a CUDA GPU and support for ‘all the popular frameworks’ of deep learning software.
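The train-in-the-cloud, infer-at-the-edge split boils down to shipping frozen weights to the device and running only the forward pass there. A minimal pure-Python sketch of that idea — the weights and single-neuron model here are invented for illustration; a real Jetson deployment would use TensorRT or a deep learning framework runtime:

```python
import math

# Weights would be trained elsewhere (on big GPUs) and copied to the
# device; the edge computer only runs the forward pass (inference).
WEIGHTS = [0.5, -1.2, 0.8]
BIAS = 0.1

def infer(features):
    """Single-neuron forward pass: weighted sum followed by a sigmoid."""
    z = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

# No network connection required once the weights are on the device
score = infer([1.0, 0.0, 2.0])
```

The appeal of the Nano is that the same principle scales up: a 128-core CUDA GPU can run this forward pass for millions of weights at camera frame rates, still without touching the Internet.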
A Robot, For The Makers
With the release of the Jetson Nano, Nvidia is focusing on the maker market, whatever that may be, and they’re releasing tutorials, examples, documentation, and the killer app of all single board computers, a 3D printed robot. The bill of materials for the Jetbot includes a 3D printed robot chassis, a Raspberry Pi camera, a WiFi card (an Intel Wireless-AC 8265), gear motors, a motor driver, and a 10000 mAh USB power bank. Add in a few screws, and you have a functioning Linux 3D printed robot.
Writing code for the Jetbot first requires connecting it to the local WiFi network, but after that it’s as simple as pointing Chrome to an IP address and opening up a Jupyter notebook. All the code in the examples is in Python, and in minutes, a kid can drive a car around with an Xbox controller, with live video coming back to the browser. This is what robotics education is.
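Driving a two-wheeled robot from a gamepad is essentially mapping two stick axes to two wheel speeds. A hedged sketch of that mixing step — the function name and clamping here are illustrative, not the Jetbot library’s actual API:

```python
def stick_to_motors(throttle, steer):
    """Arcade-drive mixing: a forward/back axis plus a left/right axis
    become left and right wheel speeds, clamped to [-1, 1]."""
    clamp = lambda v: max(-1.0, min(1.0, v))
    left = clamp(throttle + steer)    # steering right speeds up the left wheel
    right = clamp(throttle - steer)   # ...and slows the right wheel
    return left, right

# Full throttle with a slight right turn
left, right = stick_to_motors(0.8, 0.3)
```

In the actual Jupyter examples, the resulting pair of speeds would be handed to the motor driver each time the controller reports new axis values.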
The tutorials and examples for the Jetson Nano progress from basic teleoperation to collision avoidance and object following, each built by training a neural net. This is a setup that gives you a capable machine learning platform and a two-wheeled robot chassis. Can you use this to solve a Rubik’s cube? Yes, if you build the software. You can build a self-driving car, and it gets kids interested in STEM.
There’s a lesson at work here. Providing examples that get users up and running fast was the missing element that doomed Intel’s Maker Movement efforts, and it is the key to the success of the Raspberry Pi and Arduino. If you don’t produce documentation and examples, the product will fail. Here, Nvidia has done a remarkable job bringing a GPU on a small Linux module to educators and students.
The Future of the Jetson Platform
This is not Nvidia’s first Jetson product. In 2015, Nvidia released the Jetson TX1, a credit-card sized brick of a module that was intended to be the future of AI ‘at the edge’. Despite the buzzwords, this is a viable use-case for very fast embedded processors; you can train your model on all the GPUs Amazon owns, then put your model on a small embedded device. It’s AI without the cloud or a connection to the Internet. The Jetson TX2 followed in 2017, again with a credit card-sized brick of a module attached to a MiniITX motherboard. This was the module you wanted for selfie-snapping drones, or machine learning for a self-driving car.
The Nvidia Jetson Nano is a break from previous form factors. The Nano, like the Raspberry Pi Compute Module, fits entirely on a standard laptop SO-DIMM connector. This is a module designed for the same application as the Pi Compute Module; you need to build a carrier board that handles all your I/O, and this tiny little module will handle all your data. It’s a viable engineering strategy, even if it’s not for people who want to stuff modern electronics in an old Game Boy shell and run emulators. People do real work with electronics, you know.
While the Jetson Nano is the first in its form factor, there are some hints that Nvidia is going to be developing this platform over the long term. One of the biggest engineering constraints on a project at this scale is the power budget, and the Jetson Nano ships with a carrier board that includes a micro USB power input (there is an additional barrel jack adapter, rated at 5V, 4A). This micro USB power input is limited to 5V, 2A, or 10 Watts. There’s only so much computing you can do per Watt, and if you want more, you have two options: use a smaller process for your silicon or use more power. Nvidia has come up with an ingenious way to save some engineering time on the next version of their carrier board: they’re stacking footprints for USB connectors, so the carrier board also supports a USB-C connector.
Sure, it’s just a tiny detail that would go unnoticed by most, but the carrier board is already designed for a USB-C connector and the increased power it can deliver. Nvidia is clearly planning for a future with the Jetson, and with a significantly more convenient form factor we might just see it.
The team acknowledged that this isn’t great news for Indiegogo backers who’ve already been waiting several months for the VCS, but maintained that it should lead to “better overall performance” and a “cooler and quieter” machine without significant disruptions to manufacturing.
While this suggests you won’t be complaining too loudly about performance, it could still leave you frustrated. The team first vowed to ship a system in 2018, but it won’t show up until a year later. And however capable the hardware might be, the VCS will still depend heavily on software support. Developers will have to produce compelling titles optimized for the VCS, and you won’t know how that shakes out until sometime in the months ahead — assuming there isn’t another delay.