The Brandeis GPS blog

Insights on online learning, tips for finding balance, and news and updates from Brandeis GPS

Tag: artificial intelligence

Brandeis graduate student publishes new book on AI and Robotics

We are excited to announce that Brandeis Project and Program Management student Francis Govers recently published a book, Artificial Intelligence for Robotics. Govers provided us with the following description:

Artificial Intelligence for Robotics starts with an introduction to Robot Operating Systems (ROS), Python, robotic fundamentals, and the software and tools that are required to start out with robotics. You will learn robotics concepts that will be useful for making decisions, along with basic navigation skills.

As you make your way through the chapters, you will learn about object recognition and genetic algorithms, which will teach your robot to identify and pick up an irregular object. With plenty of use cases throughout, you will explore natural language processing (NLP) and machine learning techniques to further enhance your robot. In the concluding chapters, you will learn about path planning and goal-oriented programming, which will help your robot prioritize tasks.

By the end of this book, you will have learned to give your robot an artificial personality using simulated intelligence.

What you will learn

  • Get started with robotics and artificial intelligence
  • Apply simulation techniques to give your robot an artificial personality
  • Understand object recognition using neural networks and supervised learning techniques
  • Pick up objects using genetic algorithms for manipulation
  • Teach your robot to listen using NLP via an expert system
  • Use machine learning and computer vision to teach your robot how to avoid obstacles
  • Understand path planning, decision trees, and search algorithms in order to enhance your robot

Francis Govers’s paperback and e-book can be found on Amazon.

For software engineers seeking to develop an advanced set of robotics technology skills, Brandeis GPS offers an MS in Robotic Software Engineering. For more information about the part-time, fully online program, contact the GPS office: 781-736-8787, gps@brandeis.edu, or submit your information.

What’s next for EdTech

Education technology is constantly evolving alongside the development of new tools, processes and resources. Each year, an expert panel of community members publishes the NMC Horizon Report, which lays out the latest trends and developments in EdTech and identifies new impacts on learning, teaching, and creative inquiry.

This year’s key findings include:

  • In the short-term, a growing focus on measuring learning and new learning spaces;
  • In the mid-term, an increase in open educational resources and the rise of different forms of interdisciplinary studies; and
  • In the long-term, advancing cultures of innovation and cross-institution and cross-sector collaboration

The report predicts that analytics technologies and makerspaces will likely influence EdTech in 2019. Within the next five years and beyond, educators can expect to see the adoption of more adaptive learning technologies and artificial intelligence, mixed reality and robotics.

Be at the forefront of EdTech

Brandeis University is proud to offer master’s degrees for practitioners seeking to make an impact on the future of education technology:

MS in Instructional Design and Technology

MS in Strategic Analytics

MS in Robotic Software Engineering

Brandeis GPS programs are part-time and 100% online. To learn more about our master’s degrees, request more information or contact the GPS office: 781-736-8787, gps@brandeis.edu.

The Top 5 Robotics Trends You’ll See in 2018

Robotics technology has proven to evolve at a rapid pace. In 2015, Uber began testing the first of its self-driving cars, and in 2016 it launched 16 self-driving SUVs in San Francisco. With the innovations of today providing just a small glimpse into future advancements, the robotics industry has its sights eagerly set on 2018. As we roll into the new year, we’ve got our eye on five particular trends that we think could characterize the next robotics wave.


Standing At The Mean

Sam Halperin is currently a Programming Instructor at Thinkful. He is a 2011 graduate of the Brandeis Graduate Professional Studies Master of Science in Software Engineering program. He is working on a doctorate in Computer Science and also blogs at www.samhalperin.com.

Experimentation enabled by advances in low-cost consumer virtual reality hardware and software.

A few months ago, after a long hacking session with a genetic algorithm (an algorithm that evolves a solution from “chromosomes” over time), the Unity Game Engine (a 3D video game engine), and an Oculus Rift immersive display, I had what I think is a unique experience. After creating a data set with the GA, writing a renderer that transformed the data into geometry, hues, and color values, and piping the output to a head-mounted display, I was able to don the goggles and, somewhat literally, walk around, stand at the mean of the data set, and look around. For me, this view into the data was a transformative personal experience, if not a scientifically valid approach to understanding data.
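The idea above can be sketched in a few lines of Python. This is purely illustrative, not the project’s actual code: the chromosome encoding, fitness function, and parameters are all assumptions. Each “chromosome” is a 3D point, a simple evolutionary loop keeps and mutates the fitter half of the population, and the final step computes the point a viewer would stand at, the mean of the evolved data set:

```python
import random

random.seed(0)  # for a reproducible run of this sketch

# Hypothetical target the population evolves toward.
TARGET = (2.0, 0.5, -1.0)

def fitness(p):
    # Negative squared distance to the target: higher is better.
    return -sum((a - b) ** 2 for a, b in zip(p, TARGET))

def mutate(p, sigma=0.1):
    # Perturb each coordinate with Gaussian noise.
    return tuple(a + random.gauss(0, sigma) for a in p)

def evolve(pop_size=50, generations=100):
    # Start from random 3D points ("chromosomes").
    pop = [tuple(random.uniform(-5, 5) for _ in range(3))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in survivors]    # refill by mutation
    return pop

population = evolve()

# "Standing at the mean": the position where the viewer's camera
# would be placed inside the rendered data set.
mean = tuple(sum(c) / len(population) for c in zip(*population))
```

In the actual project, each evolved point would then be rendered as geometry and color in Unity, with the head-mounted display’s camera placed at `mean`.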

Weeks later a second experiment emerged, this time using sensor data attached to a stationary bicycle to drive the view-camera in a virtual environment. This apparatus had been part of a somewhat quixotic quest for a virtual-reality-based active gaming experience. Once implemented, it represented the faintest surface scratch into the vast requirements of art, engineering, sound, theatre, and animation that actually make up a production game, but it was a uniquely satisfying experiment.
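The core of such a rig is a small mapping from the bike’s wheel-speed sensor to the virtual camera’s forward motion. The sketch below is an assumption about how that mapping might look, not the project’s code; the wheel circumference and frame rate are illustrative values:

```python
import math

# Assumed road-bike wheel circumference in meters.
WHEEL_CIRCUMFERENCE_M = 2.1

def speed_from_rpm(rpm):
    """Convert wheel revolutions per minute to linear speed in m/s."""
    return rpm * WHEEL_CIRCUMFERENCE_M / 60.0

def advance_camera(position, heading_deg, rpm, dt):
    """Move the camera forward along its heading for one frame of dt seconds."""
    speed = speed_from_rpm(rpm)
    heading = math.radians(heading_deg)
    x, y, z = position
    # Standard game-engine convention: x is sideways, z is forward.
    return (x + speed * dt * math.sin(heading),
            y,
            z + speed * dt * math.cos(heading))

# One simulated frame: pedaling at 90 RPM, facing straight ahead,
# at a 60 Hz frame rate.
pos = advance_camera((0.0, 1.5, 0.0), 0.0, 90.0, 1 / 60)
```

In a real implementation the per-frame update would run inside the engine’s update loop (in Unity, typically in C#), with the RPM value read from the sensor over a serial or Bluetooth link.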

The most recent experiment in this set leveraged design training and demonstrated the architectural visualization pipeline from a consumer-grade modeler (SketchUp) to a virtual reality experience. This product, like the other two, was also the “first 20%” of effort (see the Pareto Principle), but uniquely satisfying. The video of the work has been retweeted many times and has had over 1,800 views since it went up, and I have received numerous requests for collaboration on similar projects. (http://youtu.be/mJLK_t0bTYA)

Clearly there is a growing mass movement representing a desire for this type of virtual reality technology. The defining factor in my experience, though, as distinct from virtual reality experimentation in the past, was that this work didn’t require access to a university lab, defense contractor, or space agency. This access is possible due to a sea change in VR technology driven by the release of the Oculus Rift head-mounted display.

Beginning with the release of the Oculus Rift, and followed closely by other projects, VR is beginning to take hold as a consumer-level technology. My bike-VR project is actually one of several similar experiments documented in the various online communities surrounding the technology. There is a growing community of VR hackers (perhaps a better term is makers) throughout the world, and the level of experimentation has grown exponentially.

My involvement in this work is only beginning, but I am tremendously optimistic that the technology itself represents a positive force for our ability to visualize problems, to communicate with each other, and to be present in environments that we wouldn’t normally be able to experience — across history, geography, scale and any other limits.

Question: What is the value of “being present” and experiencing virtual environments in this way?  What is the value of “standing at the mean”, and how does it differ from viewing a place, a time or a dataset on a traditional computer monitor?  What are the drawbacks?

Answer: The experience of presence with this type of display is so powerful that it can actually make the viewer nauseated, producing a sort of simulator sickness approaching seasickness. At the same time, intelligently engineered virtual environments, built with this in mind, can fool the brain in a more positive direction, producing joy, fright, sadness, even the perception of temperature changes. This is not an experience common to interaction with a smartphone or tablet.

Current VR work of interest is quite vibrant and diverse, spanning topics such as “redirected walking” techniques for navigating large virtual environments by walking around small laboratories[1], the study of “oculesics”, where eye movements are tracked and communicated across networks to enhance communication[2], and the exploration of very large datasets using large laboratory installations ringed by huge arrays of displays[3].

See Also

  • [1] Suma, E. A., Bruder, G., Steinicke, F., Krum, D. M., & Bolas, M. (2012). A taxonomy for deploying redirection techniques in immersive virtual environments. Virtual Reality Short Papers and Posters (VRW), 2012 IEEE, 43–46. doi:10.1109/VR.2012.6180877
  • [2] Steptoe, W., Wolff, R., Murgia, A., Guimaraes, E., Rae, J., Sharkey, P., … & Steed, A. (2008, November). Eye-tracking for avatar eye-gaze and interactional analysis in immersive collaborative virtual environments. In Proceedings of the 2008 ACM conference on Computer supported cooperative work (pp. 197-200). ACM.
  • [3] Petkov, K., Papadopoulos, C., & Kaufman, A. E. (2013). Visual exploration of the infinite canvas. Virtual Reality (VR), 2013 IEEE, 11–14. doi:10.1109/VR.2013.6549349


