AI, Artificial Intelligence, Computer Science, diversity, Education, Engineering, Innovation, leadership, Robotics, STEM, Technology

The Spelman College SpelBots: Remembering Black Women AI and Robotics Pioneers at 20 Years

Think about the strides we’ve made in artificial intelligence in the last 20 years: driverless cars, ChatGPT, Siri, deep learning, and the list goes on. One little-known fact is that a group of Spelman College Black women were doing things with artificial intelligence 20 years ago that, even today, only a small fraction of people could do or even think of doing. How do you program four-legged robots and two-legged humanoid robots to autonomously play soccer using computer vision, machine learning, and localization, without the robots being remote controlled? How do you combine AI and robotics with the arts and healthcare to inspire the next generation of computer scientists and engineers around the country?

With the support of Spelman’s president, Beverly Daniel Tatum; our science dean, Dr. Lily McNair; Spelman alumnae and board members; and sponsors such as the National Science Foundation, NASA, Coca-Cola, Boeing, GM, GE, Apple, and Google, the Spelman SpelBots traveled the world to compete in autonomous quadruped and humanoid robot soccer in Japan, Germany, Italy, and the U.S. against some of the leading universities in the world that were doing research in AI and robotics two decades ago.

It was around March of 2004 when I went to Spelman College to interview for a position as an assistant professor in computer and information sciences. I was already an assistant professor in electrical and computer engineering at the University of Iowa, but I had read the book The Purpose Driven Life with my wife and decided that part of my God-given purpose was to help African American students succeed academically, vocationally, and spiritually. As I shared in my book, Out of the Box: Building Robots, Transforming Lives, I interviewed at Spelman and Howard University and had an informal interview at Morehouse.

When Spelman made me an offer, I wasn’t sure what I should do. But my wife said to me, “Andrew, don’t you want our daughters to have professors who really want to see them succeed?” And I decided to go, even though many didn’t understand why I would leave a Big Ten research university for a small, historically Black undergraduate liberal arts college for women. I would go there, see unlimited possibilities for my students, and treat them as I would want a professor to treat and believe in my own daughters.

There are many stories I can tell about our experiences; my book just touches on the first team. When it was released, I was working at Apple while on sabbatical from Spelman at the behest of Apple co-founder Steve Jobs, who wanted me to help the company hire more Black engineers, and I was able to do that. More importantly, these young women were pioneering AI and robotics role models, along with my colleague Dr. Ayanna Howard, who mentored one of our first SpelBots students, Aryen, at NASA’s Jet Propulsion Lab before the team was formed. Aryen contacted me while I was moving to Spelman, and I asked her if she wanted to start a RoboCup robotics team. She volunteered to be our first co-captain, along with a student named Brandy.

The AI and robotics topics that the undergraduate Spelman students had to grapple with to compete against graduate students from Georgia Tech, Carnegie Mellon, the University of Texas at Austin, and other international teams were daunting: localization, computer vision, motion and locomotion, teamwork, and decision making. I purposely made sure that I didn’t put limits on them, and I believed in them. Several teams of SpelBots did amazing things, like the team, led by co-captains Jonecia and Jazmine, that tied Japan’s Fukuoka Institute of Technology in a RoboCup Japan Open autonomous humanoid robotics championship match. (There are so many more SpelBots students I could name here.)

So here’s to remembering the women of the Spelman College SpelBots RoboCup robotics team that competed in the U.S., Europe and Asia. Watch this video made by the National Science Foundation, and remember to help all of our young women dream big and pioneer in Science, Technology, Engineering, and Math (STEM).

Picture: Spelman College SpelBots students Jonecia, Jazmine, Ariel, and Naquasha with Dr. Andrew B. Williams at the RoboCup 2009 Japan Open in Osaka, Japan. Photo credit © Adrianna Williams, used with permission.

© 2024 Andrew B. Williams

About the Author: Andrew B. Williams is Dean of Engineering and Louis S. LeTellier Chair for The Citadel School of Engineering. He was recently named one of Business Insider’s Cloudverse 100 and humbly holds the designation of AWS Education Champion. He sits on the AWS Machine Learning Advisory Board and is a certified AWS Cloud Practitioner. He is proud to have recently received a Generative AI for Large Language Models certification from DeepLearning.AI and AWS. Andrew has also held positions at Spelman College, University of Kansas, University of Iowa, Marquette University, Apple, GE, and Allied Signal Aerospace Company. He is author of the book, Out of the Box: Building Robots, Transforming Lives.

Education, Engineering, Innovation, Technology

Apple Vision Pro: Initial Impressions from an Engineering Educator

At our recent KEEN National Conference, as I wore the new Apple Vision Pro, I often got asked, “What are you looking at?” or “What are you doing?” We take it for granted that at a conference or meeting, people are looking at their smartphones or laptops while also participating. Sometimes that’s what I was doing. When I spoke to a room full of engineering deans, I was using the Apple Vision Pro (AVP) to glance at my notes so I didn’t miss anything I wanted to say. Before I talk about how it might transform engineering education, I’ll let you know what it was like to use. I wasn’t just using it to multitask. I wanted to ask the question: Can the Apple Vision Pro make me more productive than a desktop or laptop? To me, the AVP is obviously a superior media consumption device because you can use it for augmented reality or virtual reality. But I wanted to see if it could be a superior “producer” device. Can I generate content and get work done more easily and faster? Let me share with you what it was like to wear it, and some surprises and delights I experienced.

Participating in a discussion at the Kern Entrepreneurial Engineering Network (KEEN) National Conference in Austin, Texas. Photo credit: Kurt Patterson, ASU

Wearing the Apple Vision Pro on the Plane

I have since taken a few trips on a plane with the AVP. One problem I had was putting it in Travel Mode when the cabin was dark. The AVP has all kinds of sensors to track your eye, limb, and body motion, but normally it assumes you are stationary. When you are on a plane, it has to know that you are moving in order to track you accurately. After the plane took off and we reached cruising altitude, I took it out of my bag to use it, but I had to enter my passcode first. Since I wasn’t in Travel Mode, my eyes had a hard time selecting the numbers accurately. I got a pop-up message saying that I needed to put it in travel tracking mode, but no matter how hard I tried to look at the “button” to do so, I couldn’t. I’m not sure if it was because the plane was dark, but I tried to no avail. I also couldn’t accurately enter the passcode, so it told me to try again in a minute. I propose a simple solution: a button sequence I can press manually to put it in Travel Mode. The next time I traveled, I set Travel Mode while in the airport lobby and left the battery connected. Not the best way to do things, but I didn’t want to be on the plane again and unable to use my AVP.

Using the AVP on a plane is delightful. I was able to connect to American Airlines’ and Delta’s Wi-Fi easily and watch movies and TV or listen to music. I enjoy how you can turn on an immersive environment so that you are surrounded by a starry sky or mountains. At the same time, as the flight attendant approached me, I could still see them and request soda and chips. On one trip I forgot to bring my AirPods, but the sound from the AVP itself was good enough for me to hear. In a loud environment, though, the AirPods work much better.

Walking Around with AVP

Not advisable. In fact, there should probably be a warning mode so that Apple is not liable for any accidents that occur. With that said, I couldn’t help but want to try walking around with it, and I quickly found that it’s not built for that. For example, I wanted to FaceTime my wife while I was walking in the airport. However, the FaceTime window stood still and then fell behind me as I moved forward. I could hear her voice, but I couldn’t see her. I would like to be able to see her on FaceTime as I walk, much like I do when I’m walking with her in the airport. Currently, this is not possible.

Walking with my Apple Vision Pro at the ATL airport

On the other hand, I was very surprised at how well I could actually walk with the AVP on. I use the custom optical inserts that match the prescription for my glasses. The cameras on the AVP allowed me to navigate successfully in the airport and on the streets, and I noticed that they even work well on a sunny day. Again, I would strongly recommend not using the AVP while driving.

Can I Do Real Work on It?

Yes. But make sure you learn how to turn on Voice Control in the Accessibility menu. You can put it in Command Mode or Dictation Mode (there is also a Spelling Mode, but I haven’t gotten that to work yet for some reason). If you have ever seen the original Star Trek, you remember Captain Kirk could say, “Computer, tell me where the Klingons are,” or something like that. You can do this on your Mac as well, but with the AVP, I can tell it to open or close different apps, move the cursor around, or select text with my voice. The better I get with the canned commands, or at customizing my own, the more productive I will be as I manage the many screens I can have open. Another way to visualize how it works is to think of the movie Minority Report, where Tom Cruise uses his fingers to swipe different virtual computing screens through the air. Using the AVP is a lot like using that holographic computing device combined with voice commands. But you can also use it discreetly by just controlling the cursor with your eyes and tapping your fingers.

If needed, you can use a Bluetooth keyboard like the Apple Magic Keyboard to type. I find this helpful as I’m learning to use voice dictation and voice commands better. I was surprised to see how the AVP detects that you are using an Apple Magic Keyboard and then puts the text preview above the keys on the keyboard, as though it’s magically attached. It’s very cool how the AVP tracks and knows the locations of your body, equipment, windows, and other objects.

My augmented reality desktop using the Apple Vision Pro

Your Eyes and Hands as a 3D Mouse

What you don’t initially realize is that the AVP’s phenomenal eye and hand tracking gives you a “mouse,” or point-and-click device, that allows you to move your windows and objects in 3D space. With a regular mouse you can move up and down and side to side. With the AVP’s 3D “mouse,” you can move up, down, side to side, and back and forth. The eye tracking is very accurate: I can look at the precise place I want to move the cursor, and it will move there. I can “pinch” the cursor and move it around a text page very accurately. I did have trouble selecting a phrase of text without also getting the trailing blank character; I ended up using the Command Mode “select phrase” command to select a phrase accurately. The cool thing is that the tracking also follows your lips, so that when I’m in a Zoom, FaceTime, or Google Meet call, people can see my avatar, or persona, moving its lips in sync with mine. Since I’m not using a laptop with a camera to show me live, Apple has come up with a cool, somewhat creepy way to make an avatar that tracks your smile, lips, eyes, and head movements so it looks like you. Instead of a DeepFake, it’s a RealFake (coined here first :-)).

Dealing with Motion Sickness

I admit, as a youngster I would get motion sick riding a bus. I used to get motion sick on planes and would take Dramamine, but I got used to it. After my initial demo of the AVP at the Apple Store by a phenomenal employee named Jalen, I noticed that my stomach was a little queasy after I took the AVP off. So I started using it 10 or 20 minutes at a time and got used to it. I can now go a “long” time using it. It’s actually more fun to use than a MacBook. I still think it’s advisable to take breaks and rest your eyes.

Jalen giving me my initial demo of the Apple Vision Pro

Transforming Engineering Education

I could write a series of blog posts on why I think the AVP can, and possibly will, transform engineering education. The main reason I believe it can is that it allows students to experience augmented or virtual reality in such a realistic way that it will make learning experiences more engaging, exciting, and affordable. With the new iPhone 15 and the AVP, you can record 3D videos that immerse you in the experience. I imagine you could train a surgeon to conduct surgeries without real patients. We could take our construction engineers to different environments and see how to construct buildings in Antarctica without having to travel there. The warning is that, like any technology, it can be used for harm too. For example, I wouldn’t want to see a video game where the player is stabbed realistically; it could potentially cause a heart attack. I hope that Apple and others make sure that all games and apps come with sufficient warnings and protections.

Checking out an immersive dinosaur 3-D application using the Apple Vision Pro

Opportunity, Equity and Access

If you are an engineering educator, I highly recommend it. I foresee new engineering buildings having labs equipped with tons of these, and museums as well. As with the iPhone at its inception, we will need developers, content creators, and individual users to create the apps and content that will make it useful for education. I hope that public libraries will soon carry them so that they are accessible to anyone, regardless of socio-economic status. I was the only person I saw using one publicly in the airport or at a conference; if the AVP takes off, in the next year or two I’ll see hundreds of them. By the way, this blog post was created completely by a human using the Apple Vision Pro.

© 2024 Andrew B. Williams


Artificial Intelligence, Cloud Computing, Design Thinking, Education, Engineering, Innovation, Robotics, STEM, Technology

AI Autonomy Innovation: Teaching Cars to Drive by Themselves

Beneficial AI

Often we hear about the negative aspects of AI, but less about its potential benefits. At The Citadel, our cadets displayed their talents by using deep reinforcement learning algorithms to make cars drive by themselves. In the near future, think of how beneficial it would be to have cars that can autonomously transport persons with disabilities, such as blindness or limb dysfunction, or the elderly who can no longer drive themselves. How about driverless supply vehicles in a military environment?

Thanks to the support of Amazon Web Services (AWS), we were able to host the first 2023 Senior Military College and Service Academy Warrior Week DeepRacer Tournament. Cadets from The Citadel, the U.S. Military Academy at West Point, and the U.S. Naval Academy at Annapolis trained AI models to drive AWS DeepRacer cars all by themselves, or autonomously. DeepRacer is a model car with a video camera sensor that lets it “see the road” and gather data for training an algorithm that can drive the car by itself. The students train the cars in a cloud-based simulator and then download the AI models to the DeepRacer car, which drives on a real, physical race track. The algorithm they used was deep reinforcement learning.
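
To give a flavor of what the cadets work with: in DeepRacer, students shape the car’s behavior by writing a Python reward function that the reinforcement learning algorithm tries to maximize during training. The sketch below is a minimal example of that idea; the parameter names follow the AWS DeepRacer reward-function interface as I recall it, so verify them against the official documentation before reusing this.

```python
def reward_function(params):
    """Minimal DeepRacer-style reward: stay on the track, near the center line.

    `params` is the dict of state values the simulator passes in each step.
    The key names used here (all_wheels_on_track, track_width,
    distance_from_center) should be checked against the AWS DeepRacer docs.
    """
    if not params["all_wheels_on_track"]:
        return 1e-3  # near-zero reward for leaving the track

    # Reward shrinks linearly as the car drifts away from the center line.
    half_width = params["track_width"] / 2.0
    distance = params["distance_from_center"]
    reward = max(1e-3, 1.0 - distance / half_width)
    return float(reward)


# Hypothetical state: car perfectly centered on a 1.0 m-wide track.
print(reward_function({
    "all_wheels_on_track": True,
    "track_width": 1.0,
    "distance_from_center": 0.0,
}))  # → 1.0
```

During training, the simulator calls this function thousands of times, and the deep reinforcement learning algorithm gradually learns steering and throttle choices that accumulate the most reward.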

Citadel Cadets Brian Bradrick and Blakely Odom, Dr. Pooya Niksiar (coach), and the AWS team: Venkartaraja, Anthony Yimsiriwattana, Abhijeet Patil, Alex Domijan, and Nicholas Costas (not pictured: Cadet Frederick Vogel). Combined heat scores showing the Citadel teams’ winning times, followed by cadet teams from the U.S. Military Academy and the U.S. Naval Academy.

AWS DeepRacer as an AI Autonomy Innovation Teaching Tool

Using DeepRacer as an AI teaching tool is part of the autonomy innovation that our Center for AI, Algorithmic Integrity, and Autonomy Innovation, or AI3, hopes to bring to our students, community, and state. We hope to attract partnerships with major automotive companies in South Carolina, such as Volvo, BMW, and Mercedes, to hire our students as interns and full-time engineers. We have also been partnering with AWS Machine Learning University’s AI Educator Enablement Program to put our faculty, including Dr. Pooya Niksiar, coach of our winning DeepRacer team, through advanced AI and machine learning boot camps. This will prepare our faculty, and faculty across the U.S. at community colleges, HBCUs, and minority-serving institutions, to teach machine learning at their institutions.

Drs. John Sanders, Nathan Washuta, and Gafar Elamin among the faculty and students in the DeepRacer Tourney audience speaking to team member Cadet Blakely Odom

The Winning Team – The Citadel School of Engineering

Finally, I want to give a shoutout to Dr. Niksiar and his team of three cadets, Frederick Vogel (Electrical and Computer Engineering), Blakely Odom (Mechanical Engineering), and Brian Bradrick (Mechanical Engineering), for taking First Place and Second Place against their competitors. We thank the U.S. Military Academy and the U.S. Naval Academy for their strong competition. We are happy that The Citadel School of Engineering can provide a world-class engineering education here in the Lowcountry and the State of South Carolina, serving our state and the globe with principled leadership and engineering innovation.

AWS Pit Crew Member Anthony Yimsiriwattana Watching Citadel DeepRacer car navigate the track autonomously

Main Header Picture: Dean Andrew B. Williams, Dr. Pooya Niksiar (Coach), and Cadets Frederick Vogel (ECE), Blakely Odom (MECH), and Brian Bradrick (MECH), The Champions of the Inaugural Senior Military College and Service Academy Warrior Week DeepRacer Tournament


Design Thinking, Education, Engineering, Innovation, STEM

Draw to Lead: Visual Thinking and Communication in Innovation

“Let whoever may have attained to so much as to have the power of drawing know that he holds a great treasure.” — Michelangelo (not the Teenage Mutant Ninja Turtle one)

Today we had fun in our KEEN School of Engineering Book Club discussing and drawing (literally) from the book Draw to Win: A Crash Course on How to Lead, Sell, and Innovate with Your Visual Mind by Dan Roam. Our KEEN Book Club this semester is focused on visual thinking and communication. Why? To be innovative, and to teach our students to be innovative, we need to tap into all that our brains have to offer to be creative and solve problems. Drawing allows us to connect ideas and to create and communicate value, things vital to having an entrepreneurial mindset. Yet we often underutilize the innate creative abilities the brain offers us through drawing and fail to think and communicate fully in visual terms. According to the book, 90% of the information on the internet is visual, 90% of the data that has ever existed was created in the last two years, and 90% of knowledge workers don’t know how to use visuals effectively.

Don’t Be Scared to Draw

In our book club, we talked about how, as kids, we used to draw all the time, or how our own kids are keen on being extra observant and engaged with illustrations and images. But somehow, when we get older, we compare our scribbles and doodles with others’ and think we don’t have the “gift.” Through Dan’s books, I’ve personally been discovering the power of using stick figures not only to communicate big ideas, but to let them form before my very eyes. I’ve made two presentations to big groups of professors (one on AI and another on mentoring) using stick figure drawings inspired by Dan’s book. I received comments and encouragement that people were drawn into (no pun intended) the talks because the ideas were simple, clear, and uniquely presented.

What Makes a Good Drawing

As Dan says, good images are meaningful pictures that:

  • trigger deep thoughts,
  • clarify complexity, and
  • inspire insight.

For the last few years, I have attended meetings, listened to sermons, and journaled with handwritten text and images. Not only does this allow me to work through thoughts in my head and create designs, it helps me engage with and retain information better and longer. Dan talks about how we often “draw” in our minds and bodies without pen or pencil. You’ll have to get the book to learn more about that.

Drawing in the Classroom

One of the faculty in our club, Dr. Gafar Elamin, pointed out that for his mechanical engineering senior design class, he has the students produce five different sketches showing the concept of their potential design. Soon the five sketches turn into twenty, as they inspire new ideas and connections. Another faculty member, Dr. Deirdre Ragan, said her honors students were making a presentation and apologized that their drawings weren’t very good. But she told them that “artisticness” (my own choice of made-up word) isn’t what mattered; it was the ability of the drawing to communicate the idea. After that, the students perked up, “went to town,” and excitedly explained their ideas.

Make it Practical

Instead of just verbally communicating our thoughts and ideas in our book discussion, we took time to practice what we were reading. For those who were “non-drawers,” we started with the Mike Rohde and Dan Roam approach of drawing and labeling a dot, circle, triangle, square, and line. Then we showed them Mike’s simple drawings of a fish, hamburger, dog, camera, and others, and let them see how those simple shapes are building blocks for creating drawings labeled with words (i.e., sketchnotes). Our faculty remarked on how much fun they were having drawing, even the one who initially said she couldn’t draw.

More Fun to Come

Our club will explore other “visual mind” books, such as The Sketchnote Workbook by Mike Rohde and Pencil Me In: The Business Drawing Book for People Who Can’t Draw by Christina Wodtke. Dan Roam has written many books on drawing for visual thinking and communication. I was fortunate, through my LinkedIn and real-life connections with people like Mike Rohde (author of The Sketchnote Workbook), to hear about Dan’s book The Back of the Napkin: Grant Wright connected with me on LinkedIn after I wrote a blog post about the Kindle Scribe vs. the reMarkable 2 and later told me about the book. As in life, drawing helps us to create connections with people, ideas, and innovation.

Picture: Our KEEN Book Club in the School of Engineering. For the image-observant: we ran out of hard copies of the book, but more are on order.


Education, Engineering, Entrepreneurial Mindset Learning, leadership, STEM

Citadel Engineering: Focused on Students, Not Rankings

So proud of our Citadel Engineering faculty and staff for the national accolades for their student-focused education efforts. As a team, we are focused on delivering the best engineering education experience and value for our cadets and students. The rankings may follow, but they aren’t our priority or focus.

🏆 Proud that U.S. News has ranked us a top-25 undergraduate engineering program (non-doctoral) in the nation for the 13th straight year!

🏆 Proud that we are tied as the 4th-highest-ranked public non-doctoral undergraduate engineering program in the nation. (We are state-funded, not federally funded or private.)

🏆 Proud to be the #1 engineering undergrad program (non-doctoral) out of the senior military colleges. 

We are humbled, because all this is made possible only by the support we receive from the State of South Carolina, our industry partners, and our alumni. Congrats again to our faculty, staff, and leadership team!

#GoDogs!

© 2023 Andrew B. Williams


AI, Artificial Intelligence, Design Thinking, Education, Engineering, Entrepreneurial Mindset Learning, Innovation, Technology

Looking “Under the Hood” of Generative AI

Is using AI like driving a car? You don’t have to know how to design a car in order to drive it around town. Back in the day, we had a whole class called Driver’s Ed in junior high school, where we learned the rules of the road and got to sit behind the wheel for the first time and drive. My dad, on the other hand, not only knew how to drive a car; he would buy old cars (because we couldn’t afford a new one), take out the old engine, rebuild it, and put it back in. Using AI or machine learning algorithms can be similar to driving a car, while diving deeper into them is more like repairing or designing one. You don’t have to know how to repair or design a car just to drive it.

Driving “Artificial Intelligence” programs, or “Driving the Car”

Learning and using artificial intelligence is like driving a car. Nowadays, you don’t need to know how to create the algorithms yourself; you can just “drive” them. As with driving a car, you still need to know the rules of the road, how to evaluate where you are and where you are going, and how to be safe. But you don’t need to know all the intricacies of how an internal combustion engine or an electric motor works, much less how to design one. Back when I first started studying AI during my master’s in 1991, we had to either write our own AI code or find open-source software we could build on. Today, with platforms like AWS SageMaker, the algorithms are already coded. You can access and use them if you know which ones to use and how to string them together sequentially in Python code; you just need to read up or take a class on how to use them. Thankfully, for educators at community colleges, HBCUs, minority-serving institutions, and some primarily undergraduate institutions, AWS has set up an AI Educator Enablement Program. I’m happy that five of our faculty have begun taking the AI boot camps offered in conjunction with The Coding School. This semester, I’m teaching a class modeled after AWS Machine Learning University’s Machine Learning Through Application course.
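
The “string them together” idea can be sketched without any cloud platform at all. Below is a toy, hypothetical pipeline in plain Python (the function names and steps are my own invention, not SageMaker’s actual API) showing how prebuilt steps get chained sequentially, each one’s output feeding the next:

```python
# Toy illustration of chaining prebuilt processing steps in sequence.
# This mimics the *shape* of platform pipelines, not any real platform API.

def normalize(values):
    """Scale values into [0, 1] (a typical prebuilt preprocessing step)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def threshold_classifier(values, cutoff=0.5):
    """A stand-in 'model': label each value strictly above the cutoff as 1."""
    return [1 if v > cutoff else 0 for v in values]

def run_pipeline(data, steps):
    """Feed the data through each step in order, like stringing algorithms together."""
    for step in steps:
        data = step(data)
    return data

labels = run_pipeline([2.0, 4.0, 6.0, 10.0], [normalize, threshold_classifier])
print(labels)  # → [0, 0, 0, 1]
```

The point of the analogy: the “driver” only decides which prebuilt steps to use and in what order; the internals of each step stay under the hood.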

Designing “Artificial Intelligence” Programs, or “Designing the Car”

Learning artificial intelligence can also be like learning to design a car. Many of us are now familiar with ChatGPT, a conversational generative pre-trained transformer that can generate new text based on a typed-in prompt. Recently, I took the DeepLearning.AI and Amazon Web Services (AWS) Coursera course Generative AI with Large Language Models and earned the certificate of successful completion at the end. This course was more like getting “under the hood” of the car and seeing what makes it work. I thoroughly enjoyed it, and I’ll share a few things I liked.

Understanding the Generative AI Product Lifecycle

The GenAI course described the entire generative AI lifecycle, including defining the scope, selecting the LLM to use, adapting and aligning the model, and integrating the application. The first part of this lifecycle is understanding the use cases, in our case in higher education: we have to start with where it makes sense to use an LLM in an engineering course, for example. I’m excited that tomorrow we have our “Safely Exploring Generative AI for Faculty and Student Learning” design thinking session, supported by the Kern Family Foundation and our new virtual Center for Artificial Intelligence, Algorithmic Integrity, and Autonomy Innovation (AI3). We have faculty from all five of our schools (Engineering, Business, Humanities, Math and Science, and Education) representing about fifteen departments across campus. It’s imperative that all of our faculty begin thinking about the impact of GenAI on education generally and, for engineering, how it will change the way we prepare our engineers.

Pre-Training a Large Language Model

In the early days of AI, intelligent agents, or AI-enabled computer programs, were designed to “reason” symbolically using logic and inference engines. Today, the “reasoning” and “learning” in AI are done using statistical methods; in the course, LLMs are described as statistical calculators. To build an LLM, you take in large amounts of unstructured data, for example from the internet, pass the data through a data-quality filter, and run the pre-training algorithm on GPUs, updating the LLM’s weights.
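
The “statistical calculator” idea can be illustrated at toy scale. The sketch below is my own simplification, not from the course: it counts which word follows which in a tiny corpus and then “predicts” the most frequent successor, which is the spirit of next-token prediction with the neural network stripped away.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, how often each next word follows it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the statistically most likely next word."""
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # → cat  ('cat' follows 'the' twice, 'mat' once)
```

A real LLM replaces the count table with billions of learned weights and conditions on the whole preceding context rather than one word, but the underlying question is the same: given what came before, what token is statistically most likely next?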

How Pre-Training Works

LLMs essentially are trained to guess the next word in the text using a Transformer architecture, introduced in the paper “Attention Is All You Need.” A transformer consists of an encoder and a decoder. Depending on the task, you can have encoder-only models, encoder-decoder models, and decoder-only models.

  • Autoencoder models, or encoder-only models, take the input words, or tokens, and learn to guess masked tokens using a bidirectional context. They are good at tasks such as sentiment analysis, recognizing named entities, and classifying words. Example models: BERT and RoBERTa.
  • Autoregressive models are decoder-only models and attempt to predict the next token in a text using a one-directional context. They are good for generating text and related tasks; this is the type of model GPT is. Example models: GPT and BLOOM.
  • Sequence-to-sequence models mask, or hide, random spans of input tokens in the encoder; the decoder then tries to reconstruct the span, or sequence of tokens, autoregressively. They are good for summarizing text, question answering, and translating text. Example models: T5 and BART.
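One way to picture the difference between these model families is through their attention masks: encoder-only models let every token see every other token (bidirectional), while decoder-only models use a causal mask so token i only sees tokens 0 through i. This is my own illustrative sketch, not code from the course:

```python
# Bidirectional mask: every token attends to every token (encoder-only).
def bidirectional_mask(n):
    return [[1] * n for _ in range(n)]

# Causal mask: token i attends only to tokens 0..i (decoder-only).
def causal_mask(n):
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

for row in causal_mask(4):
    print(row)  # lower-triangular pattern of 1s
```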

Tasks LLMs Can Do Well

Existing LLMs can do many tasks relatively well. These tasks include:

  • Essay writing
  • Language translation
  • Document summarization
  • Information retrieval
  • Action calls to external applications

Prompt Engineering Won’t Always Improve an LLM’s Results

Those who are “driving the car” of LLMs know that they can specify what result they want to see using a prompt. You can also configure the LLM for the amount of randomness or length of response by modifying inference parameters, including top k, top p, temperature, and max tokens. Modifying or writing a more complex prompt using a basic knowledge of how the LLM works can improve the results. This is called in-context learning and can involve giving examples of the prompt and the desired results. Giving no extra examples is called zero-shot inference, and giving one is called one-shot inference. Again, these things are covered in the DeepLearning.AI and AWS course, but I thought I’d mention them. When we start diving more into some of the theory, we are getting “under the hood” rather than just “driving the car.”
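To show what two of those inference parameters actually do “under the hood,” here is a small self-contained sketch of my own (the tokens and scores are invented for illustration): temperature reshapes the next-token probabilities before sampling, and top-k keeps only the k most likely tokens.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=None, seed=None):
    """Sample one token from a dict of {token: score} using
    temperature scaling and optional top-k filtering."""
    # Lower temperature sharpens the distribution toward the top token.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    # Keep only the k highest-scoring tokens, if requested.
    ranked = sorted(scaled.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        ranked = ranked[:top_k]
    # Softmax over the remaining tokens, then sample from them.
    z = sum(math.exp(v) for _, v in ranked)
    probs = [(tok, math.exp(v) / z) for tok, v in ranked]
    r, cumulative = random.Random(seed).random(), 0.0
    for tok, p in probs:
        cumulative += p
        if r <= cumulative:
            return tok
    return probs[-1][0]

# Made-up scores; with a low temperature, "soccer" dominates.
logits = {"soccer": 3.0, "chess": 1.5, "tag": 0.5}
print(sample_next_token(logits, temperature=0.1, top_k=2))
```

With temperature near zero or top_k=1 the output becomes effectively deterministic, which is why low-temperature settings are often recommended for factual tasks.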

The Computational Costs for LLMs

Another aspect of the course that I liked is that it delved into a straightforward explanation of the computing costs involved. Those familiar with machine learning and cloud computing know that NVIDIA GPUs are the hardware engines that do the compute processing required to train LLMs. The course helps us realize that ML algorithms in general, and LLMs specifically, require lots of computational processing power. A business or a higher ed institution conducting research will have to factor in these costs.
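A quick back-of-envelope calculation makes the scale of these costs tangible. A common rule of thumb in the scaling-law literature is roughly six floating-point operations per model parameter per training token; the model and dataset sizes below are illustrative, not from the course:

```python
def training_flops(num_parameters, num_tokens):
    """Rough pre-training compute estimate: ~6 FLOPs per parameter
    per training token (a widely used rule of thumb, not exact)."""
    return 6 * num_parameters * num_tokens

# Illustrative example: a 7-billion-parameter model on 1 trillion tokens.
flops = training_flops(num_parameters=7e9, num_tokens=1e12)
print(f"{flops:.1e} FLOPs")  # prints "4.2e+22 FLOPs"
```

At tens of sextillions of operations, even heavily utilized GPU clusters run for weeks, which is why most institutions fine-tune or prompt existing models rather than pre-train their own.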

Techniques for Fine-Tuning the LLM

The course covers the methods used to fine-tune the LLM so that it can perform better at specific types of tasks. Although most casual LLM users are familiar only with the GPT models, other models exist and can be used. I just noticed that this blog post is getting long, so I’ll end here.

Be Happy to “Drive AI” but Be Willing to Dive Deeper

In order to use AI, machine learning, or generative AI models like LLMs, you don’t need to know everything under the hood. But learning how these models work will be helpful. Many people complain that GPTs aren’t good at math. If you understand the architecture, you can see that they aren’t built for that. But they can be tied in with other applications that can do those things. I am hoping that as engineering educators, we can bring more understanding of AI, ML, and GenAI to the general public but also train others to design and build the next generation of AI algorithms.

Picture: Participants in a recent “Safely Exploring Generative AI for Faculty and Student Learning – Using Design Thinking and Entrepreneurial Mindset” session, sponsored by The Kern Family Foundation.

© 2023 Andrew B. Williams

About the Author: Andrew B. Williams is Dean of Engineering and Louis S. LeTellier Chair for The Citadel School of Engineering. He was recently named one of Business Insider’s Cloudverse 100 and humbly holds the designation of AWS Education Champion. He sits on the AWS Machine Learning Advisory Board and is a certified AWS Cloud Practitioner. He is proud to have recently received a Generative AI for Large Language Models certification from DeepLearning.AI and AWS. Andrew has also held positions at Spelman College, University of Kansas, University of Iowa, Marquette University, Apple, GE, and Allied Signal Aerospace Company. He is author of the book, Out of the Box: Building Robots, Transforming Lives.

Education, Engineering, Entrepreneurial Mindset Learning, Innovation, leadership, STEM, Technology

Capturing Purpose and Passion in a Mission Statement

One of our faculty courageously stated in our meeting today something to the effect, “I hate to say it. But why do mission statements sound so generic and lack passion and love? As a parent, I’ve seen a lot of these college mission statements, and I’m not sure I’ve seen many that connect with my child or with me as a parent.” Today, we hit that head-on with the help of Dr. Sonia Alvarez-Robinson. We were honored to have such a seasoned strategy consultant work with our School of Engineering to begin our new strategic planning process. If we do nothing else but refine and capture what was shared today in our breakout session, we will have succeeded.

I’m not going to give a rundown of the steps we took today but talk more about why we took them and the energy that we felt. As people shared their personal “whys” for why we teach students, for example, someone talked about having love for our students and a love of the discipline they are teaching. We also heard comments that empathized with how our students engage with and experience their engineering curriculum at our institution. To that point, another faculty member added that we are starting a new class for first-year engineering students that combines all of our disciplines so that students can be exposed to each one before making a long-term commitment to a major.

We also shared how it is important to give them the fundamentals, but also show them how to work with others in other disciplines on big problems that must be solved on a bigger, and sometimes global, scale. Some of our Executive Advisory Board members, who themselves are alumni and executives in large engineering firms, stated how important interdisciplinary collaboration is in the real world. Interdisciplinary collaboration is one of our four strategic priorities, along with innovation throughout the curriculum, infrastructure for growth, and inclusion and outreach.

At the end of the session, we gave everyone the chance to share one word about how they felt about our session developed by Dr. Alvarez-Robinson to co-create our vision and mission. Words such as encouraged, informative, insightful, tiring, helpful, collaborative, productive, interesting, contemplative, and inspired were spoken. My word? Energized.

Picture: Many of our School of Engineering Faculty and Staff with Dr. Sonia Alvarez-Robinson (in blue) at our Initial 5-Year Strategic Planning session. (Thank you Michael Kelsh for taking the picture for us.)

© 2023 Andrew B. Williams


Ai, Artificial Intelligence, Computer Science, Design Thinking, diversity, Education, Engineering, Innovation, leadership, Robotics, STEM, Technology

V.E.R.B. Mentorship: Taking Action to Mentor Others in AI, Computing, and Engineering

If we are willing to admit it, we have all benefited from some form of mentoring, whether informal or formal. I was reminded this summer of the joy I receive from mentoring at very little cost to myself. My story of mentoring is focused on Ronald Moore, a former summer undergraduate research student in my lab, who is now a Ph.D. student in the Computer Science department at Emory working this summer as an Amazon Web Services (AWS) Applied Science Intern.

A Vision to Help Others Through Tech

I had met Ronald when he was probably in elementary school and reconnected with him one summer while he was finishing his degree in Electrical Engineering at the University of Pennsylvania, where he had played at least one season on the football team. He didn’t have anything planned for the summer, so I encouraged him to consider working in my research lab, the Humanoid Engineering and Intelligent Robotics (HEIR) Lab, at Marquette University in 2015. He hadn’t had much programming experience, so I started him on a project to build small robot kits, using Intel’s version of an Arduino, that could be used to teach middle school girls from underrepresented backgrounds how to build and program robots. It was a learning experience for Ronald. But he also learned that he could use technology to help others engage in and benefit from computing and engineering.

Expecting Success Even through Failure from Your Mentee

It’s important to raise my mentees’ expectations: what I expect from them and, more importantly, what they expect from themselves. My approach to mentorship is to be available for advice and guidance but to put the bulk of what needs to be discovered and done on the student I am mentoring. I find that my students can quickly come up to speed on subjects and often surpass what I know if they are given an engaging project and plenty of focused time to work. That summer seemed to spark Ronald’s interest even more in robotics. He had a few setbacks, and sometimes progress was slow, but he was encouraged to persist.

Resources and Relationships to Grow

Mentors realize that they don’t have all the knowledge and experience to effectively develop the mentee. I encouraged Ronald to go to graduate school. He took some time working with friends who had a startup but then decided to pursue a master’s in computer science, somewhat of a big change from pure electrical engineering. Ronald followed me to the University of Kansas and began his master’s research in human-robot interaction with me as his advisor. I taught him his first artificial intelligence class as well as an HRI class. It was exciting to see him progress. He also became involved in our IHAWKe (Indigenous, Hispanic, African-American, Women KU engineering) program as a graduate student leader. He later applied to become a GEM Fellow (which I suggested to him), was selected, and eventually interned with IBM. I was a GEM Fellow myself, so it was rewarding to see him apply and become a GEM Fellow as well.

A Belief Instilled and Pursued

As we discussed his future, I was able to share the pros and cons of pursuing a Ph.D. He applied and was accepted into the Ph.D. program in Computer Science at Emory University with IBM as his sponsor. I could tell that, like me, he found it a challenge to start his program. What I really give him credit for was how he would contact me at least monthly to set up times when we could Zoom and he could ask any question he wanted. We often spent some of that time either cheering or lamenting the KU Jayhawks or the KC Chiefs. Ron, to this day, continues to do his part as a mentee to initiate times when he wants advice or a sounding board. I have benefited because he shares his research, but more importantly, I have the joy of seeing him grow as a young man and as an AI researcher.

A Goal: Going Beyond the Mentor

He saw my involvement with AWS through various educational projects and programs. This summer he decided to apply to become an intern at AWS. I’m proud to say that he was hired as an Applied Science Intern with AWS. He’s using his expertise but also expanding his research knowledge and space. Ron’s Ph.D. research at Emory is centered on developing methods that reduce bias in clinical risk prediction and treatment effect estimation algorithms. You can only imagine what he’s working on now. Let me just say, he’s going way beyond what I’ve done or know.

V.E.R.B. is an Action Word for Mentorship

So here’s how I help lead, inspire, and mentor others: V.E.R.B., an acronym that I developed and have used over the years.

VISION – help them see the possible and dream of the unseen in their own lives and the lives of others.

EXPECTATION – set reasonably high expectations for them and positive expectations of success, even if there are slight setbacks along the way.

RESOURCES/RELATIONSHIPS – provide them with the resources they need to succeed and connect them with others who will help build up their experiences and expertise.

BELIEF/BELONGING – believe in them before they even believe in themselves, know that they belong in their chosen field, and continue to cultivate their own belief in their capabilities, potential, and possibilities.

To all my past and current mentors, I continue to dedicate myself to mentoring others. And those who want to be mentored by me only need to contact me and show the type of commitment to learning and growth that Ronald continues to show. Ronald is using his research abilities to help others avoid problems with machine learning bias and to benefit from how ML can be used to help clinicians and nurses heal others more effectively. And that’s a mentoring V.E.R.B. worth acting on.

Picture: Ronald Moore in front of his research poster at the 2019 ACM/IEEE International Conference on Human-Robot Interaction in Daegu, South Korea, as a student of mine in the Humanoid Engineering and Intelligent Robotics (HEIR) Lab at the University of Kansas.

© 2023 Andrew B. Williams


Ai, Artificial Intelligence, Cloud Computing, Computer Science, Design Thinking, Education, Engineering, Entrepreneurial Mindset Learning, Entrepreneurship, Innovation, Robotics, STEM, Technology

Safely Exploring Generative AI for Faculty and Student Learning

How is generative AI going to impact your career? How is it going to impact engineering educators and students’ learning experience? We are planning a faculty design thinking session to explore Generative AI (GenAI) for how it can be used to help students learn. Why? Everyone is curious about it, from students to CEOs, and we are all trying to figure out how it connects to what and how we teach, and how we can create value for our students by using GenAI. We are trying to identify the opportunity it presents for engineering educators and how we can scale it best for impact. Sound familiar?

What do we mean by “safely”? We recognize that most faculty may not have a background in AI and rely on what they hear from others. Many teachers are worried about how to keep students from using it to cheat. There are many other fears that faculty and students have about AI, including the fear of being left behind the technology curve and losing relevance in their current or future careers. Many students near graduation are worried they are not prepared for an AI-enhanced workforce. Faculty who have not been in the engineering industry for a while, or possibly ever, may be unaware of how GenAI is impacting the workforce or the military. Also, by “safely” we mean that as educators, we limit exposure to toxic GenAI output, respect intellectual property, and discern accurate and honest content. Faculty and students must learn how to use GenAI responsibly.

We are going to get all of these issues out on the table in a comfortable and open intellectual space. We are going to use design thinking to empathize with faculty and students. We are going to clearly define the needs, pains, and potential gains that faculty and students have related to AI in general and to GenAI specifically. We’ll take time to brainstorm potential solutions or “products.” We will build some teaching prototypes and get feedback on our ideas. Trust me, it will be a “safe” space to explore the technology’s impacts and learn how we can tackle the challenges together.

We are delighted to have Christina Hnova, from the University of Maryland Academy for Innovation and Entrepreneurship, facilitate our session. I met Christina through my past instructor at the Stanford d.school Teaching and Learning Studio, Dr. Leticia Brito Cavagnaro. Through the Amazon Web Services (AWS) Machine Learning University and AI Educator Enablement Program, we have been able to begin preparing many of our faculty in the School of Engineering to teach and integrate AI. But we are also aiming to help faculty in other schools, including the Humanities, Business, Education, and Science, explore GenAI with curiosity, connections, and value creation for an interdisciplinary AI student learning experience. Come join us!

Picture: A curious learner with Kathleen, one of my Humanoid Engineering and Intelligent Robotics (HEIR) Lab undergraduate research students, during an AI-enabled humanoid robotics outreach event when I was at Marquette University.

Acknowledgments: Thanks to KEEN and the Kern Family Foundation for their support!

© 2023 Andrew B. Williams


Ai, Artificial Intelligence, Cloud Computing, Computer Science, Design Thinking, diversity, Education, Engineering, Entrepreneurial Mindset Learning, Innovation, STEM, Technology

Using Entrepreneurial Mindset and Making to Spark More Accessible AI

What do you think this is a picture of? How would you imagine it connects with a student learning artificial intelligence? A little more on that later but let me share a little on the process of why and how I arrived at this little contraption to teach some AI and machine learning concepts.

KEEN MakerSpark: A Framework for Developing Entrepreneurial Mindset Activities

This week I participated in the KEEN MakerSpark workshop. We looked at how to use “making” and the three C’s of an entrepreneurial mindset (curiosity, connections, and creating value) to improve our engineering curriculum. Since I teach AI and machine learning, I was curious how I could use “making” to visually and tactilely demonstrate how machine learning works to a college student or even a child. In this context, “making” refers to physically making something with your hands. In the context of the three C’s, the making is driven by a student’s curiosity, their need to make connections from disparate information, and prototyping a concept or an idea to create value.

Deconstruct/Reconstruct Troublesome Knowledge

In teaching, we often want students to learn a new concept. But there is more to teaching a concept than giving a student a definition, equation, or example. We need to deconstruct what the student already knows and how they arrive at that concept. Troublesome knowledge consists of those engineering or computing concepts that our students seem to struggle with the most. Working backwards, starting from this troublesome knowledge, we then design a learning activity using objectives with observable outcomes and ways to measure learning.

I identified what makes some introductory AI knowledge “troublesome.” Students may not know that machines can “learn”. Students may not understand the different ways machines learn. Yes, at this point I could put up some complex math equations that explain machine learning, but what does this mean to a middle school student trying to learn the basics in a visual and tactile manner?

Defining Success and Struggling in Learning

Working backwards, we can define the concept we want students to learn and the things we can observe that show whether they have mastered it. These learning objectives should state clearly what we want the student to learn, by when (e.g., end of class), and the observable way of telling they learned it. A concept in AI that I want students to learn is unsupervised versus supervised machine learning classification. They are struggling if they can’t identify which is which visually.
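For readers who want the concept in code rather than craft supplies, here is a toy 1-D sketch of my own making (the numbers and labels are invented): the supervised classifier gets labeled examples to learn from, while the unsupervised method must discover the two groups on its own.

```python
# Supervised: labeled examples ("small" vs "large" measurements).
labeled = [(1.0, "small"), (1.2, "small"), (7.8, "large"), (8.1, "large")]
# Unsupervised: the same kind of data, but with no labels at all.
unlabeled = [1.1, 0.9, 8.0, 7.9]

def supervised_classify(x):
    """Nearest-centroid classifier built from the labeled data."""
    centroids = {}
    for label in {lbl for _, lbl in labeled}:
        points = [v for v, lbl in labeled if lbl == label]
        centroids[label] = sum(points) / len(points)
    return min(centroids, key=lambda lbl: abs(x - centroids[lbl]))

def unsupervised_cluster(points, iterations=10):
    """Two-means clustering: discovers two groups without labels.
    Toy version; assumes two reasonably separated groups."""
    c0, c1 = min(points), max(points)
    for _ in range(iterations):
        group0 = [p for p in points if abs(p - c0) <= abs(p - c1)]
        group1 = [p for p in points if abs(p - c0) > abs(p - c1)]
        c0 = sum(group0) / len(group0)
        c1 = sum(group1) / len(group1)
    return sorted(group0), sorted(group1)

print(supervised_classify(2.0))        # prints "small"
print(unsupervised_cluster(unlabeled))  # two groups, no labels attached
```

Notice the key observable difference I want students to articulate: the supervised output comes with a name (“small”), while the unsupervised output is just groups that a human still has to interpret.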

Modeling the Knowledge Using Analogies, Sketches, Data Physicalization, or Stories

To figure out how to model this knowledge, I thought of analogies to unsupervised/supervised machine learning classification. I came up with analogies, metaphors, similes, and stories and drew sketches. I won’t list them here, but it involved drawing sketches with stick figures of what I know about this topic. I then brainstormed ideas about how to physically show the data as it flows through a machine learning classifier or neural network, or some other teaching tool or experimental model. I picked one of the ideas and decided I would make a simple “maker” exercise for students to try. Hence, the “contraption” I made in the picture with a tube, holes, and small BB- and marble-sized balls of different colors. The fun part of this process is that the instructor gets to “make” a low-fidelity prototype proof of concept that guides what the instructor will then ask the students to “make,” though not necessarily the same prototype. In other cases, the instructor’s prototype will be the basis of the learning activity the students use. For example, one of the faculty, Mark Ryan, created a prototype game to teach “for” loops and “if” statements to non-computer scientists.

Prompting the Student to Make Prototypes and Use Them to Assess Their Learning

After explaining the concept of unsupervised/supervised machine learning classification, I would prompt the student to make something that demonstrates the concept. I wouldn’t want to give them the answer but be there to give them hints and clues and positive encouragement to think of analogies and metaphors themselves. I would instruct and encourage them to use the low-fidelity prototype materials (a.k.a. craft supplies) to build their prototypes and test them on other students. If I’m being kind of vague, it’s because I want to try this out on some of our students this fall to see what I learn first.

Innovating throughout our Engineering Curriculum

I am grateful for the many teaching innovations that I have been able to experience through workshops like Stanford d.school’s Teaching and Learning Studio and the KEEN Network’s MakerSpark workshop I just went through in Boston, where I learned the concepts I’m sharing here. As an engineering leader, I’m grateful that the Kern Family Foundation provides these opportunities for all of our faculty to learn to innovate in their classrooms alongside other faculty in the KEEN Network. The opportunities are there, and it’s up to us to seize them and make them a reality in our students’ learning experiences.

Picture: A low-fidelity, hands-on teaching model for students to use data physicalization and making to learn the concept unsupervised and supervised machine learning classification.

© 2023 Andrew B. Williams

