AI Image Generation for Educators

Dive into the world of AI-powered image generation and discover how this cutting-edge technology can transform your classroom. Join us as we explore several AI image generation tools and gain practical strategies for crafting effective image generation prompts.

Attendees gained valuable insights into:

  • AI image generation tools suitable for educational settings

  • Best practices for writing prompts that yield high-quality educational images

  • Innovative use cases for integrating AI-generated images into lesson plans and projects

  • Practical tips for using AI-generated images to support diverse learning needs and boost student engagement

AI Summary Notes:

The webinar focused on the use of image generation in education, discussing various tools and ethical considerations. Key points included the explanation of how image generators work, popular image generator tools, ethical considerations and limitations, writing effective prompts, live demonstration and feedback, and classroom applications.

🎨 Image Generation in Education (01:30 - 05:13)

  • Discussion on the use of image generation for creativity in lesson planning.

  • Examples of using image generation for visual writing prompts and custom images.

  • Engaging students with creative tasks using image generation.

🖼️ How Image Generators Work (05:13 - 10:35)

  • Explanation of how generative AI and image generators function.

  • Discussion on the diffusion approach to image creation.

  • Examples of different image generators and their strengths.

💡 Popular Image Generators (10:36 - 16:25)

  • Overview of popular image generators like Ideogram, DALL-E, Google Gemini, Microsoft Designer, Adobe Firefly, Canva Magic Media, Midjourney, and Leonardo AI.

  • Discussion on the strengths and limitations of each tool.

  • Emphasis on free and accessible tools for educators.

⚠️ Ethical Considerations and Limitations (16:25 - 22:27)

  • Discussion on bias in image generation models.

  • Concerns about cultural representation, historical and scientific accuracy.

  • Issues related to copyright and ethical use of generative AI.

  • Potential for misuse, including deepfakes and explicit images.

📝 Writing Effective Prompts (22:27 - 33:34)

  • Tips for writing effective image prompts.

  • Importance of being specific and detailed in prompts.

  • Examples of well-structured prompts for different styles and subjects.

🔄 Live Demonstration and Feedback (33:34 - 41:56)

  • Live demonstration of creating images using different tools.

  • Participants suggest prompts and see real-time results.

  • Discussion on refining prompts and troubleshooting issues.

📚 Classroom Applications (41:56 - 52:48)

  • Ideas for using image generation in the classroom.

  • Examples include illustrating stories, visual definitions, and creating custom classroom materials.

  • Encouragement for educators to experiment with these tools.

  • Amanda Bickerstaff

    Amanda is the Founder and CEO of AI for Education. A former high school science teacher and EdTech executive with over 20 years of experience in the education sector, she has a deep understanding of the challenges and opportunities that AI can offer. She is a frequent consultant, speaker, and writer on the topic of AI in education, leading workshops and professional learning across both K12 and Higher Ed. Amanda is committed to helping schools and teachers maximize their potential through the ethical and equitable adoption of AI.

    Mandy DePriest

    Mandy is a Curriculum and Content Developer at AI for Education. She has over 15 years of experience in public education, having served as a classroom teacher, library media specialist, and instructional coach. She has also taught education technology courses in higher education settings as well as professional development workshops for teachers on the transformative power of technology. She is committed to ensuring that students are prepared for the dynamic demands of the future by leveraging the power of technology-driven instruction.

  • Amanda Bickerstaff
    Hi, everybody. Welcome to our webinar. We're super excited to have you here with us today. We are going to be focusing on something that we've been having a lot of fun with here at AI for Education, which is image generation. We've done a lot of work on kind of text prompts and, you know, really focusing on lesson planning and differentiation, but we just think that the ultimate use case for generative AI right now is creativity. And so we're going to be focusing on all free chat bots that are free, excuse me, free image generators. And we're really excited. So we're going to give everyone just another minute to come in. But if you have been here before or it's your first time, say hello in the chat, say where you're from, what you like. Let's make sure that everyone has access to the chat.

    Amanda Bickerstaff
    We do. So say hello, where you're from, where you're coming from, and if you have a favorite text-to-image generator, share it. Let us know what you use. If you've never used it before, we'll do it together today. But we're just really excited to have you here already. We always love our international crowd, which is super awesome. And yeah, we're gonna get started. So, Amanda, can you. Well, okay, first of all, we are now going by Amanda and Mandy, which is really funny because we both also go by Amanda and Mandy, but we're going to be Amanda and Mandy DePriest, who is our amazing curriculum and content developer. You want to say hi, Amanda? Oh, gosh. We're going to do this. Mandy.

    Mandy DePriest
    Hi, everyone. It's so exciting to be here with you today. We're going to have some fun.

    Amanda Bickerstaff
    Awesome. And so we're going to go to the next slide and we're going to look at how to set the scene. So if this is the first time you've been with us or you've come here before or you're watching it at home on a recording, we want you to really get involved. So we love the fact that everyone's saying hello already and just a short introduction. We want you to also just try it on. This is such a cool opportunity to prompt with us. We'll be giving you some tips and tricks on how to get the most out of image generation in general, but also how you can think about applying it to your practice. And if you have an amazing resource.

    Amanda Bickerstaff
    Let's say that you have your favorite prompt, or you have a good resource on this, or you want to share, please do so in the chat. We are a community of practice. It is our favorite thing that we get to do with our webinars, to build a community of practice. And so please feel free to see it the same way and to share with each other. So we're going to move to the next slide. So, yeah, image generation, this is the cool thing, is that all of a sudden we don't just have generative AI that can help you with, well, lesson planning and with creating new communications, but also it can do video. So if you haven't tried Luma, they have a new video generation tool that's free to use.

    Amanda Bickerstaff
    You can make music with Suno, you can do all these amazing things. One of our favorite use cases is actually image generation. And we love image generation for a couple of reasons. Number one is that we actually use imagery with our students all the time. And a lot of times what we have to do is go to the Internet and scour, oh, gosh, Google Images, like, to find the right image, or we're trying on Canva or all these things. And often these images don't actually really fit what we need, just in general. So that's one. But also the opportunity to start really driving creative thinking in your classroom can be so kick-started and, like, absolutely supported by image generation. So what we really love to do is think about it as ways in which you can transform lessons into creative tasks.

    Amanda Bickerstaff
    So, Amanda, do you want to talk a little bit about Matt Miller's approach in his language classes?

    Mandy DePriest
    Yeah, he was just on the Generative Age podcast, which is great. Give it a listen. Talking about how he used it with his Spanish language students, like, as a visual writing prompt. So they would have to describe the image in Spanish based on what they were learning, but he was able to really customize it based on what they'd been studying. Or, I mean, you could use this where the students are creating the prompt as well, trying to get a specific image in the other language using a prompt and an AI image generator. So there are endless applications. We're going to look at some other use cases later in the webinar, but it's a great opportunity not only for you to exercise your creativity, but also for your students as well.

    Amanda Bickerstaff
    Absolutely. And I love this idea of also, like, taking a creative writing prompt and having the kids come up with their own image and then have to write about it. Because these image generators are wild. They're so interesting, and sometimes they're really unexpected, which we'll see in a moment. And so there is that opportunity to have them do both sides, the creative side of building what they're going to be talking about or writing about and then actually doing it. The second is engaging students with custom images. Let's say that you're an elementary school teacher and you're doing some work on the ABCs. What's the kids' interest? Are they into their school mascot? Are you focusing on the environment or climate? You can actually create custom images for the ABCs, for students themselves, even for your classroom.

    Amanda Bickerstaff
    But even down to the individual student, you can also think about making learning visual. That could be in two ways: that can be in you helping to create images to do that, but also students being tasked to actually show their thinking in visible ways. So having them use image generation to explain how they're feeling. So Mandy had a great idea for one of our PDs, which used an image generator. We had just done a whole day on generative AI, and everyone's brains were going. And so we had a prompt of create an image that expresses how you're feeling right now. And what was really interesting is we had some people. The one that won in this case was an image of one of the leaders that had a really excited version of herself and a really confused version of herself.

    Amanda Bickerstaff
    And what it showed is that there was this real excitement, but also that she was struggling to really understand how to do this. And something like that is making her thinking visible. It's very evocative of what's going on. And it's something that can be a really beautiful companion, not just to us, but also to students that potentially are nonverbal or struggle with actually expressing themselves through language. You can do that through image now. And the last thing is, of course, one of the things that really can be helpful always, is that with generative AI, you can save time. Like we talked about, going into Google Images and trying to find that perfect image could take you a long time.

    Amanda Bickerstaff
    Or you can spend about ten minutes fooling around with one of these image generators we're going to look at in a moment and start to build those images much more quickly. So that's how we think about using it. If you have a great use case, put it in the chat. I know that we're sharing what we like, but also if you want to share a great use case, please do that right now. We're going to go to the next slide, and we're going to talk a little bit about how they work. Just like all generative AI, image generators work by taking in an enormous amount of data. In this case, image generators look at stock photos primarily, or the images that we put on social media, pretty much all images that are available on the Internet.

    Amanda Bickerstaff
    And what it does is, through a neural network, it's trained on these images. And essentially, the images are like, here is a cat, which we'll look at later. So it'd be a picture of a cat, and then the word cat, and they do self-supervised learning. And then over time, the models are able to not just identify the idea, like create a cat, but also use a cat as part of a bigger whole in some cases, so actually creating an image around that. So we have this happening, and then, based on the prompt that you give it (so it still matters how you prompt the tool), the model, or in this case the image generator, is going to use a diffusion approach to structure the image.

    Amanda Bickerstaff
    So what you're going to actually see is on the very, like, on the. And you see that? Is it a. Is that. What is that? Is it a random.

    Mandy DePriest
    It's a karate kid, Amanda.

    Amanda Bickerstaff
    Oh, no. Oh, man. Our first. Everybody, a karate kid.

    Mandy DePriest
    I came up with that myself.

    Amanda Bickerstaff
    You did?

    Mandy DePriest
    I was proud of myself.

    Amanda Bickerstaff
    Mandy. A high five. How funny is that? Our karate kid. You can see that the diffusion model is actually building this out of pixels. So you can see that this noise is building it pixel after pixel. And that's why it takes a little bit of time. If you submit your prompt, it is actually taking this tool, and then what it's doing is actually creating pixel by pixel. And then you have your karate kid, which, man, I'm glad I didn't know that before. That's so funny. But it is interesting. So if you think about a correlate with a text generator, where every word is the next prediction, in an image generator it's going to be every pixel, and we're going to start to see that over time, this will get faster and there'll be different models.

    Amanda Bickerstaff
    But right now, if you think about the way these work: all that training data creates this understanding of what you're asking for through your prompt and its training, and then it's building that image through pixels itself. Next slide. So I'm going to hand this over to Mandy, and so we're going to. I know these have already been dropped, all of these, but do you want to talk through the different models you can use?
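
For readers who want to see the "start from noise, refine step by step" idea in code, here is a toy sketch in Python. It is not a real diffusion model: an actual image generator uses a trained neural network to predict and remove noise at each step, while this stand-in simply blends a noisy array toward a known target so you can watch the picture emerge from static. The array size and step count are arbitrary choices for illustration.

```python
# Toy illustration of diffusion-style denoising, not a real generator.
# A trained model would predict the noise to subtract at each step; here we
# cheat and blend toward a known target image so the gradual refinement of
# pixel values is easy to see.
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((8, 8))          # stand-in for the finished image
x = rng.standard_normal((8, 8))      # step 0: pure random noise

steps = 10
for step in range(1, steps + 1):
    remaining_noise = (steps - step) / steps   # shrinks to 0 on the last step
    x = remaining_noise * x + (1 - remaining_noise) * target
    error = np.abs(x - target).mean()
    print(f"step {step:2d}: mean distance from target = {error:.3f}")
```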

    Mandy DePriest
    Yeah, so we have spent a lot of time playing around on different models. There are way more than these, but these are kind of the big ones, and all of these are free or accessible in some way on a free plan. So we wanted to point them out. Ideogram is one of our favorites. It is very strong in these kind of detailed, photorealistic renderings. It can get into fantastical imagery as well, but it's really great with text, which, as we'll talk about a little later on, can be tricky for AI image generators. So if you want to generate something where someone's holding up a sign or has a speech bubble, Ideogram is the way to go. DALL-E is incorporated into ChatGPT-4o, so you can access it there.

    Mandy DePriest
    We'll demonstrate that a little bit later too, but you can get really great images from it. Google Gemini gets a little rocky with their images sometimes. They got in a little trouble recently, you may have heard, so they're not currently generating images of people, but they can still do all of the other image styles and non-human elements. Sometimes Gemini forgets that it can do pictures and I have to remind it. That's the case when I'm using DALL-E in ChatGPT-4o as well. So you just have to gently nudge it. Say try again. Yes, you can. And you'll get a nice cartoon bear. Microsoft Designer and Copilot is a great image generator. They'll actually give you four to choose from based on your prompt. That's my nice head of lettuce there.

    Mandy DePriest
    We're trying to showcase the different varieties of styles that you can do artistically: photographs, paintings, cartoons, things like that. Love Adobe Firefly. It has some wonderful built-in tools to help you get a nice prompt with a specific artistic effect or filter or rendering. That's it in the bottom left with the nice spooky castle. Canva Magic Media, if you're using the free Canva for Educators account, has the capability to make pictures. It's not as robust as some of the other ones. It can't handle a lot of really detailed, specific photorealistic prompts, but it's really great with cartoons. And if you just want something that you can quickly integrate directly into a design you're making in Canva, it's a great option. I've got my little anime guy here. Midjourney is a little bit different. The other ones are all text-to-image prompts.

    Mandy DePriest
    Midjourney requires you to structure your prompt a little bit differently, and so there's a bit of a learning curve. But if you can get into it and play around with it, you can get some really incredible photorealistic images. People love Midjourney, so we're not going to super emphasize it in this training because it does have that kind of barrier to entry. But you can't do an image generation webinar without mentioning Midjourney. So we just wanted to throw it out there as an option if you are comfortable with a more advanced tool. And then Leonardo AI is another one. It's really great with people. Sometimes, for me, it struggled with non-human things, but its faces were always really strong. I've got Henry VIII here talking on a smartphone, and I was really happy with how that one came out.

    Mandy DePriest
    You may know some other great image generators. Drop them in the chat if you use something else that you love. These are kind of the main ones that we wanted to focus on.

    Amanda Bickerstaff
    Absolutely. And I want to just show that I have, if anybody went to ISTE, this is Adobe Firefly's. You can see them. This is Stardust. He's got a cute tail. So Stardust is actually. There we go. We have to show it. I'm very excited about this. These were in very strong demand, I think, at one of 20. But Stardust also is a companion within Adobe Firefly, which is free for teachers, that is kind of their assistant. But I think that if we go back to the slide, I do appreciate that you came off. What you can see is a couple of things. One of the things I just want to point out is that Mandy and our team have gotten really good at this.

    Amanda Bickerstaff
    And so we have seen the fact that we can start to update and kind of have some funny, slightly cool, slightly silly things that we infuse into our professional development. But actually, we're going to teach you how to do that, so that you can start to think about it. The level to which we pick what we use is going to be probably more specific than where you are, if you're just getting started. But I can say right now, if you get really comfortable with Ideogram, you could do pretty much all these styles if you have the right approach. And so what we want you to do always in these trainings is we're going to give you, like, the foundation.

    Amanda Bickerstaff
    But if you want to start with one that you feel comfortable on, then you can expand. You don't have to use all eight of these, but just use one and learn how to use that well, because it will translate to the others. So we're going to actually show you what's available. Like I said, here at AI for Education, we are equity focused. We will never train you on anything that you have to pay for. So, Amanda, you want to. Amanda, do you want to take us through these pieces?

    Mandy DePriest
    Yeah, so these all offer a certain amount of usage on their free plan each day. Ideogram will give you 20 images a day. If their site is very busy and they're experiencing a lot of traffic, and you're on the free plan, you're going to get bumped and they're going to tell you to come back later. Paid users do get priority, but provided everything's good, you can get 20 images a day, and that has always been more than enough for my purposes. Firefly is free as well. You can get 25 images a month. I believe educators can get in and get a free plan, and you might be able to get more than that. Google Gemini is a little secretive. They do have image limits, but I could not ever find an explicit statement of what that is.

    Mandy DePriest
    They just tell you when you've reached it. So I think it probably varies based on usage each day. I've never actually hit it myself, and I always do images in batches, like, you know, eight, ten, twelve at a time, and I've never run into an issue. Copilot and Canva Magic Media again have no stated limit, but I'm sure they'll tell you when you hit it. I've never hit it myself personally. Midjourney, if you dive into that realm, will give you 15 images a day. And then Leonardo operates a little differently. They have a token system where on the free plan, you get 150 tokens per day.

    Mandy DePriest
    All the images I've created on Leonardo have been 14 tokens, but that seems like kind of an odd number, so I'm wondering if I gave it maybe a really elaborate prompt, it might charge me more tokens. But at 14 tokens per image, I can easily get ten images a day. And again, that's always been more than sufficient for my needs.

    Amanda Bickerstaff
    I am muted. Amanda, can we go back up? I think we just have to clarify something. Images don't necessarily mean just one per generation. Ideogram is actually 80 images, because what they do is give you a block of four, and so you get to pick which ones you like, you can remix them, you can change them up. And so those are, like, just the generations, but they will give you selections that you can look at. And so it's really interesting. Brandon, perfect question asking about copyright. We can't talk about generative AI here at AI for Education without talking about the ethical components. So if we want to go to the next slide, we're going to talk about these limitations and concerns. And that is quite a strong image that you have there, Mandy. But the first is just always bias.

    Amanda Bickerstaff
    And so stock photos are incredibly biased. If you go to a stock photo site and look up a teacher, you're going to see a lot of teachers looking exactly the same way, with the same types of classrooms, that are going to have inherent bias. So the image generators that we have are biased as well. Cultural representation is really tricky. Again, remember, these tools work on probabilities. And so while there are very specific cultural representations in our imagery and in our cultures, unless those are very online and part of the training data, it's very difficult to have that represented in an image that's generated from one of these open models. What I'd say is we should be seeing more and more image generation models that are more culturally responsive. But right now, definitely be careful.


    Amanda Bickerstaff
    Let's say you're doing a unit on a Native American tribe and you think that you're going to use an image generator to create something that would be culturally relevant: I would suggest very strongly that is not a good use case right now, especially if you are not from that Native American culture and/or don't have an expert that you could talk to. So just be careful that you're not using these image generators to actually unintentionally put bias forth in a way that was never possible before, because we weren't making these images before. Historical and scientific accuracy: if you're trying to create a new diagram or a new model for, let's say I'm a science teacher, mitosis, it'll probably not be accurate, so just be really careful.


    Amanda Bickerstaff
    We kind of love it, though, because we would use it as a misconception: like, here, students, here's this new model we made of mitosis, and have them identify what's wrong and actually have them find the errors. We would use something like that to flip it. But we would definitely not be relying on these for anything that you want to be fully accurate, because you also do not have a fine level of control. You're not really able to fully change anything to your specifications at this stage with the image generator, even with Midjourney. So you don't want to use it for something that needs to be deeply accurate for students, anything that's super specific or super complex. We'll show some examples of what goes wrong there. Like something where you want to have.


    Amanda Bickerstaff
    One of our funny things is that we did a promptathon. Students thought they could create a brochure with an image generator, and they actually included one because it was so bad. It was, like, hysterical. It was, like, completely and totally. None of the words made any sense, but they showed it as a limitation. So, like, anything that you want to be super specific, it won't work as well. Then to Brandon's point about copyright. Oh, man, right now, you know, I talked about Luma, which is a. Sorry, Suno, which is a generator for music. They're being sued right now by music labels. Adobe Firefly, on the other hand, has actually paid for all of the stock photos that were used to train it.


    Amanda Bickerstaff
    So they're the only model right now that has taken an ethical approach to actually working with photographers and other creatives to ensure that their image generation is going to be ethical. But DALL-E, Ideogram, and others, not only are they taking copyrighted and creative material that's protected by license, but they're also allowing it: Ideogram lets you create images of Batman and others that are going to be copyright protected. So you need to be careful on two sides of how you use it, especially with students. But also, if you really want to think about ethics, I would use Firefly so that you actually can feel confident that the creators are being prioritized and paid for their work.


    Amanda Bickerstaff
    Another thing to consider is if you're a creative yourself and you're thinking that you're going to be able to copyright an image that has been created by generative AI, you cannot at this stage; there are no copyright protections for anything artificial intelligence creates. So just be careful also if you're a creative thinking that you can do this as well. Guardrails: like, you know, you can create images that can be, well, we'll actually put these together: deepfakes and explicit images can be very easy to create. Many of these tools that we shared have protections in place. Right, Mandy? Like, they have, like, you know, kids can't smoke and things of that nature, but they're easy to fool. And then also there are other applications that have been designed to get around that. Absolutely.


    Amanda Bickerstaff
    And so, like, the idea of deepfakes: there are issues with students, young people, actually creating non-consensual deepfake nudes of people. There are image generators, like, you couldn't use Firefly to do that, but you could use one of these nudification apps that exist. So actually, Amanda, or Amanda, if you don't mind putting into the chat the deepfake guide that we have, just so you have it as well. So, yeah, exactly. There are others that are going to have none of those protections in place. Like, Firefly has big protections and Gemini has big protections because they're actually trying to be ethical. And sometimes even too much, like, Ben Franklin really isn't copyright protected.


    Amanda Bickerstaff
    I mean, but what I will say, though, is that on the other end, you have these open playing fields that can be really difficult. So we'd always suggest if students are using these tools to teach about the ethical components as well. So we're gonna. We're gonna shift into an example of bias. Mandy, you wanna talk a little bit about this?


    Mandy DePriest
    Yeah. And if Dan's handy with that deepfake link, I'm trying to pull it up, so if you don't mind, drop it.


    Amanda Bickerstaff
    That'd be awesome.


    Mandy DePriest
    Yes. But so just as kind of an illustration of bias and how it's present in these images, I went to four different tools, and these are the prompts that I put in. This is the only thing I put in. And I asked for a talented doctor, an ambitious CEO, a successful lawyer, and a great teacher. And if you notice, they're all fairly similar, to the point where three of them are even in the same pose. We have all white males. I see a woman in the background, kind of blurry there. I think this doctor probably moonlights as a teacher, because this looks like the same guy to me. Got the same haircut, the same facial hair.


    Mandy DePriest
    To be fair, Adobe Firefly did give me, like Amanda said, it gives you four, and two of the four were women, but this guy was in here, so I wanted to put them up. So this is just an example of how its default is to go to what it has seen, the most of which for most of these models has been white male between the ages of 18 and 35. So when I'm wanting to make sure that the content I'm creating is diverse, I have to specify a specific race or age or, you know, physicality that I want to make sure is included, which feels a little squicky itself. So I am looking forward to the day where we are able to update training data and get these models to be a little more inclusive.


    Amanda Bickerstaff
    Yeah. And then you were talking, I remember, about trying to, like, make diversity a bigger part of the presentations. We do that by having, like, body diversity, right? You said you were struggling with even having people not be, like, the thinnest people, that thin-model kind of people. Right.


    Mandy DePriest
    I specifically started asking for people to be chubby because then I would get, like, a normal looking person instead of an emaciated skeletal waif.


    Amanda Bickerstaff
    Yeah. All the isms, unfortunately. So there are some ways to combat them. But until the models themselves get better at being able to be fine-tuned, be trained on more diverse data sets, we're going to see this happen. Okay, next slide. Let's keep rolling. We're going to actually get into how to do this. So you're going to do some prompting with us in a minute. And so here is our Shakespeare bot, which we love. It's super cute. And so when we think about writing image prompts, the thing to consider, unless you're using Midjourney, is that when you are writing a prompt in DALL-E or Canva Magic Media or Ideogram, what's actually happening is you're writing a text prompt that is being optimized by a generative AI model in the background, and then it is going to create an image.


    Amanda Bickerstaff
    That is why sometimes what you'll see is you'll put something in that's fairly simple, and then if you see the output, which Mandy will show, you'll actually see that the prompt it comes back with is very long and detailed. It's because you're prompting the text generator to prompt the image generator. So it's actually a form of agents, where two AIs are talking to each other. So what we suggest is using the same skills that you use for good prompting, which are going to be very specific context, you know, like specific adjectives and concrete nouns, and you always have to include a subject, and we'll actually show that. So in this case, this was a robot dressed as Shakespeare writing with a quill pen. Very simple.
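
To make the "two AIs talking to each other" idea concrete, here is a rough sketch of that two-step flow using the OpenAI Python SDK. This is our own illustration, not how any particular tool is implemented: the model names, the wording of the rewrite instruction, and the assumption that an API key is configured are all placeholders. Tools like Ideogram's magic prompt or Copilot do the equivalent for you behind the scenes.

```python
# Sketch of the two-step flow: a text model first optimizes the short prompt,
# then the expanded prompt is sent to the image model.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

short_idea = "a robot dressed as Shakespeare writing with a quill pen"

# Step 1: the text generator rewrites the simple idea as a detailed image prompt.
rewrite = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Rewrite this as one detailed image-generation prompt, covering "
            "format, subject, mood, lighting, and framing: " + short_idea
        ),
    }],
)
detailed_prompt = rewrite.choices[0].message.content

# Step 2: the image generator renders the expanded prompt.
image = client.images.generate(model="dall-e-3", prompt=detailed_prompt, n=1)
print(detailed_prompt)
print(image.data[0].url)
```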


    Amanda Bickerstaff
    And then what we would suggest is you see how it goes, and then you can work with the system to remix it and/or to create a new version with more detail once you see what it does. So, Amanda, do you want to talk about the next slide?


    Mandy DePriest
    Yeah. So some things to include are the format that you want: if you want it to be a photograph, if you want it to be an illustration, a painting, a comic book, whatever the style. If you want a particular artistic style, you can say expressionist style or impressionism, or black and white drawing, black and white photograph. Definitely the subject, what you want it to include. The mood, if you want it to be spooky and dark, or if you want it to be bright and happy. Light, and this is interesting: you put your designer hat on and think about how the light is going to be, if you want warm light, if it should be sunset; or golden hour is a phrase that I see a lot incorporated into image prompts.


    Mandy DePriest
    If you want something to be spotlighted or in highly contrasted light, you can specify that. If you want a particular color scheme, and this is great for if you're making images custom to your school colors, like anything mascot related, you could say, I want this in blue and gold, or whatever your colors are. An artistic style and reference: again, you could say in the style of Pablo Picasso, or in the style of Monet or Warhol or whomever. And also framing: again, putting that designer hat on, thinking about how you would compose the shot, like if you were a photographer. Like, I want a close-up of this person, or I want a wide angle shot, or, you know, any of those terms. And so you can say looking from the top down.


    Mandy DePriest
    And so all of these are very specific details that will help you get the picture that you want in one or two shots. Because I found that the longer I talk to an image generator, the more confused it gets. Like, if I have to keep asking, it will change this and change this. It starts to really go off the rails, and I get some strange stuff. And so the more I can include at the front, so that I'm more likely to get that in just one or two shots, the better off I am.
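
For anyone who wants to script this, the checklist above (format, subject, mood, lighting, color scheme, artistic style and reference, framing) can be folded into a tiny helper so nothing gets left out of the first attempt. This is a minimal sketch; the function and field names are our own invention, not part of any tool mentioned in the webinar.

```python
# Assemble the prompt elements listed above into one specific prompt string.
# Any field can be left out; only format and subject are required.
def build_image_prompt(format_, subject, framing="", style="",
                       mood="", lighting="", colors=""):
    pieces = []
    if framing:
        pieces.append(framing)                   # e.g. "a close-up"
    pieces.append(f"{format_} of {subject}")     # e.g. "photograph of ..."
    if style:
        pieces.append(f"in the style of {style}")
    if mood:
        pieces.append(f"with a {mood} mood")
    if lighting:
        pieces.append(f"{lighting} lighting")
    if colors:
        pieces.append(f"color palette of {colors}")
    return ", ".join(pieces)


print(build_image_prompt(
    format_="black and white photograph",
    subject="a chessboard in the middle of a game",
    framing="a close-up",
    lighting="dramatic",
))
# -> a close-up, black and white photograph of a chessboard in the middle of
#    a game, dramatic lighting
```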


    Amanda Bickerstaff
    Great.


    Mandy DePriest
    And so here's an example. Yeah, sorry, Amanda, I didn't want to.


    Amanda Bickerstaff
    Say, go for it. Keep going.


    Mandy DePriest
    Your words. So this is an example. So I'm specifying the format and the style: I want a black and white close-up photograph. I talked about the framing; I want it to be close up. I want it to be a chessboard in the middle of a game, in the style of a Nikon D6 high shutter speed camera, which is something I saw on a website. I do not know anything about photography, so I assume that is a very good camera, using dramatic light and 8K. So things like 4K and 8K just specify the level of detail that you want the picture to have. So when I want something to be photorealistic, a lot of times I'll say 8K, because that really pushes it to look like a real photo.


    Mandy DePriest
    And so here's another prompt. This time, I want a comic book style illustration of a head of broccoli dressed as a superhero flying straight at the viewer. See, I told it the position that I want it to be in and how it should look to the viewer. And then I mentioned the colors that I want: red, yellow, blue and green. So maybe I'm in an elementary setting and we're talking about healthy foods, and I want something like this for, like, classroom decoration or something. This would be a good way to get it. And then our third example, specifying an artistic style and an artist: I want a surrealist painting in the style of Salvador Dalí, giant spoons and forks, this is what I want, using evening light. So I specified that. And it got a little weird with the spoon here.


    Mandy DePriest
    I don't know what's happening to him, but it is a surrealist painting, so I'm happy with it.


    Amanda Bickerstaff
    Absolutely. And so we're getting lots of really good questions. So I think we're just gonna start with a couple of questions that we have. And so I would say, like, the way you get the best results is being as specific as possible about what you want and doing it in a way that's structured. Meaning, like, if you're just word-salading it and it's just, like, 18 words that are put together, it actually can get confused. And so the more specific you're going to be, the better. And so if you look at this example of different tools, same prompt, the way that Mandy's framed this is: it's a close-up, so we're identifying what the view is. It's a photograph, so it's photorealistic, of a calico cat relaxing on a wingback armchair. Very specific.


    Amanda Bickerstaff
    So if you just put armchair, or chair, it could be a million different things. Near a window on a rainy day, the light is moody and dim, and 8K. First of all, the Internet is for cats. We love to see cats here. But what you're going to see is that it's not very long, but it's super thoughtful and intentional in the way that it's framed and phrased. And actually, the more that you use these tools, the better you're going to get at knowing the nuance. And so, for those of you that are just getting started, we would suggest one of these pieces. But if you are an expert and you really want to get down to characters that are the same, and you want hex codes of colors, and you want certain types of styles.


    Amanda Bickerstaff
    Midjourney, which is going to require you to learn how to prompt Midjourney, is going to be better for you. Just think about, as you get started, as you become more of an expert on this or already are, you're always going to want to use tools that are designed specifically for you. So, for example, Adobe Firefly is kind of the introduction to the Adobe suite of generative models, but they have other models that allow you to expand an image, allow you to insert people, to change backgrounds. And so really, think of this as, like, getting you started. And then the tools themselves, as you build expertise, will be there for you when you're ready to do those hyper specific applications.


    Amanda Bickerstaff
    And I just want to point out that one of the things that's really cool about generative AI, which we see in these four image generators, is the creativity. So you're more likely to get this kind of wild creativity than images that always look the same. So you have to be ready for a little bit of trial and error, because the same way that generative AI for text is always going to be creative, these tools have been designed to be creative as well. Okay. Also, do you want to talk about this, Mandy? We know that we love words. It's not that great at words yet.


    Mandy DePriest
    Well, it's not a language model the same way that ChatGPT and all of those are. It's working with pixels and images, and so text can be hard for it to represent. I've given you some examples, and underneath each image is the text that I asked it to say, and then what I tended to get. So things will blur, like in "the big summer playing." I don't know, they've got some blurry letters there. They've left some letters out. They tend to repeat words: I have Doctor Smartypants in that middle image twice. It was supposed to say all the brilliant poets. We got poders instead, and Doctor Smartypants on there twice. And then the last one was not a specific list of text at all. I just kind of gave it free rein.


    Mandy DePriest
    I just wanted it to list the parts of a plant cell, and it just made up some letters. It does not know the parts of a plant cell. And it's been a minute since I was in biology class, but I would reckon that's not a scientifically accurate image of a plant cell itself. So yeah, text is hard. I do have better luck with text when I put it in quotation marks, especially on, like, DALL-E and GPT-4o when I'm using it. But right now, Ideogram is the one that I go to whenever text is a component of the image that I want, and it generally does pretty well.


    Amanda Bickerstaff
    Yeah, and I would say, like, go for a phrase over, like, a sentence or, like, a treatise. I like how time is flying. It's going pretty fast there. But what we see here is that there are lots of really good conversations in the chat as part of this. Remember that this is very nascent, so there are a lot of tools that are just getting good at certain things. So be ready for these updates to happen. Someone asked about putting a person within an image: there are actually headshot creators, like, applications that have been designed where you can upload an image of yourself and it'll create professional headshots. There are ones like, DALL-E has released a couple of things that are really interesting.


    Amanda Bickerstaff
    You can go to a part of an image that's been created and actually change it, and you can start to have consistent characters, not characterization in terms of, like, linguistics, but you can actually have the same characters in the same panels going forward, in the same images. So there are these really cool ways that we're going to start to see more interesting use cases, although at this stage, they're all a little bit unreliable. Okay, so now we're going to get started. So we'd love actually to do this with you guys. So we'd love for you to put in the chat a couple of things. We're going to ask you to give us what you want us to create an image of together. What are we going to use first? Which model are we going to use?


    Mandy DePriest
    Should we do Copilot, since that's the one I've cited on this?


    Amanda Bickerstaff
    Yeah, let's do Copilot. We're going to show you a couple of examples of what this looks like. And if you've already done this before, hang out with us for a little bit. If you have tips and tricks, that would be great. We just want to make sure that we show everybody how to actually use this. So if you want to put some ideas in the chat. Nice. Please be nice. But there are, like, if anybody has an idea or anything that you want to do, like, is there a character, a place? Mount Rushmore. I love it. Curtis. An adventure book for him. Oh, we love this. We actually see that one of the use cases is students creating their own books, but also parents creating storybooks, which is really awesome. So you've got a tuxedo cat reading a book about cats.


    Amanda Bickerstaff
    I love how it's always like, cats and dogs. So let's include Mount Rushmore somewhere. Mandy, let's do a dog. But let's do it. Let's have the dog read a book about cats. Let's make this about unity. Absolutely. And we'll definitely show some of these extra cool things about being able to specify it within the model as well. I love how it's always cats, everybody. Why is it always cats? What about a zebra?


    Mandy DePriest
    The Internet is for cats, like you said. So Copilot will make you be logged in to create an image. If you are just using Copilot without logging in, you'll have to create an account. But it does give you these four. And, hey, how cute. That's adorable.


    Amanda Bickerstaff
    It is pretty. You want to open. You want to open them up a little bit. They're a little hard to see. And this is where you actually can see, like, a couple things that you'll be able to do is that you can, like, how cute is that? But what you can do is you can actually now start to, like, change up the style on the bottom. You could, it says add a squirrel peeking from behind the book. I don't know why I thought that would be good. That's not something I'm interested in, but you can start to change the style here.


    Mandy DePriest
    And you also have these options to.


    Amanda Bickerstaff
    Like, pixelate it exactly.


    Mandy DePriest
    Turn it into watercolor, whatever stylistic device you could desire, you don't have to wait.


    Amanda Bickerstaff
    And so this is Fangfang's point, that sometimes you can do in the style of Pixar, in the style of, that are actually designed as part of it. Oh, my gosh. How funny is that? Oh, my gosh. How cute. It says the dog Mount Rushmore on the book, with this little professional cat on it. But this is an example. And then maybe we can show the other features, too. You can share, you can edit, you can download it. And if you're thinking about transparency with your students and staff, we always like to put underneath every image where it was created, the same way that you would cite a piece. And so I think that these are really great examples. Why don't we show.


    Amanda Bickerstaff
    Would you want to show Ideogram now, or should I? So let's do another one. And thanks to everybody; like, actually, people are starting. I know we're coming up on time, but let's actually look at another one. I saw a good one. So Madria, I probably said your name wrong, I'm sorry. Her school is named after the kingfisher, so maybe we can have it be, I don't know if we can do a color, like a coloring book approach to a kingfisher bird swimming. We can maybe try that out. And so in Ideogram, you can either set this to private or not. And what you're going to notice as Mandy's doing this is that the auto setting, remember we talked about, it's actually prompting the image generator. That's what you do with auto.


    Amanda Bickerstaff
    If you have it off, you're going to use it like you would Midjourney, where it's very specific, but you can change the ratios, you can change the style. We'll actually look at that again. It's going to take a little bit of time. So maybe while we're doing this, can you actually just open up that again? Let's open up another, like, piece where we can actually look at. I want to just show those components, what you can do. Maybe if you can go to your gallery, Mandy. Okay, so what we're going to do is look at what.


    Mandy DePriest
    Whoa. Don't judge me.


    Amanda Bickerstaff
    Oh, this is Mandy on the beach. She's just at the beach. Let's go to Mandy on the beach. That one, I love it so much. How cute.


    Mandy DePriest
    This one, I asked for a teacher on summer vacation.


    Amanda Bickerstaff
    Love it. And then what you see is the prompt, but the Magic Prompt is what's actually prompting the image generator. But what you can do is you can retry it, you can remix it. If you go to remix really quickly, do you mind opening that up? This is where you can change the image weight if you want it to look like it, but this is where you can change the style: photo, illustration, et cetera. So just know that not only do you create the image, but then you can also adapt it as well. And so for the same styling for Curtis and Fang, there are a couple that you can try. I would say that right now, the goal for DALL-E is to have the ability to have consistent characters. So you can definitely try with that.


    Amanda Bickerstaff
    But it's going to be a little bit hard to do that. So just know, remember, we're at the very earliest stages of this work. But here you go. We have our, well, it's not quite a coloring book version.


    Mandy DePriest
    I probably should have specified, I want a black and white outline.


    Amanda Bickerstaff
    Yeah. So what we could do is we can go back and we can do that and we'll do that in the background as we go forward. But she's actually going to now go back to the image itself. So if you just click on the one we like right there, she's one or the teacher. And then this one, we'll do this one so we can do like a coloring book. And then what we're going to do is go to remix. And that's where we can change it to black and white and try that out.


    Mandy DePriest
    Let's say black and white outline, maybe.


    Amanda Bickerstaff
    In a coloring book style, or line art. That's a good idea, Nathan.


    Mandy DePriest
    Okay, I'm going to put that at the end here.


    Amanda Bickerstaff
    Okay. So this is just an example. It's going to make you wait because of that: the image generation actually is quite expensive for the companies themselves. These are not only expensive in terms of computing costs, but also environmental costs. Just something to consider. But while we're doing this, do we want to go back and maybe. So this is an example of how we're providing feedback. But do you want to show one more? Is there any Firefly or Gemini, or is there anything you want to show specific to this?


    Mandy DePriest
    Here are the CEOs in Firefly. So we have this guy here, but we also had two women and two people of color. So that was great. I do want to point out on Firefly, you have this little toolbar on the side where it will kind of set up the prompt for you, if you want to upload a reference image or if you want to set a certain style, you know, or if you want a particular effect like collage or something. You can specify the color and tone, the lighting. So if you're not, like, super designery minded yourself, Firefly is great to have those tools for you to help you set that. Let's see if it'll run again. Maybe it won't. I have a lot going on right now, but this is a great toolbar if you're using Firefly, as well as their ethical images.


    Mandy DePriest
    I mean, Firefly is coming out ahead in a lot of these.


    Amanda Bickerstaff
    Yeah, I mean, I think we work with Adobe, so Adobe was one of our partners for our summit that we did. And we really like an ethical approach, and so the fact that they paid for all their images makes us feel more comfortable. And as Rick dropped into the chat, first of all, Rick, thank you so much for sharing all your knowledge, and everyone else that has, but they've really done a lot of work on ethics here. But do we want to go back to Ideogram and just kind of see if that worked a little bit better? It's still thinking. No, still didn't quite work. And so what we suggest is, it's a little bit of trial and error sometimes. Maybe it just won't work, or we need to have a better approach.


    Amanda Bickerstaff
    And they're also, again, we're going to try one more. Let's try one more. And then if. Let's start brand new.


    Mandy DePriest
    Let's just start brand new. Yeah.


    Amanda Bickerstaff
    Yeah. Start brand new. And we also just like starting brand new if it's not quite working, instead of trying to continue to push through. But what we want to do is, we're coming up on time, we're going to come back and just spend the last bit of time talking about classroom application. We always want to challenge everyone here to try, just absolutely try. So the person that put in this idea of kindergarten pictures for worksheets, we'd love you to try that out with one of these image generators, because the prompting really is more about you trying and seeing what happens and then giving it feedback. Because the power of these tools is often not that it gets it right the first time. In fact, very rarely does it get it right the first time.


    Amanda Bickerstaff
    We want you to get comfortable with giving feedback, trying, seeing what works, seeing what doesn't, because that's going to really help out. And so what we're going to do is we're going to shift back to the presentation. We'll come back and see if we got it one more time. Let's actually talk a bit more about the actual use cases for the classroom. And so what we have here is, like, lots and lots of examples. We have things like illustrating a story. So thank you, Fangfang, for actually putting it into the chat: a storybook creator. I can tell you right now, the cool thing that's happened is two things. We were in a training very early on, in October, of our work, where an instructional leader actually brought in a picture book that a fourth grader had actually built.


    Amanda Bickerstaff
    And so, a little bit of, like, you know, he wrote it with ChatGPT and then illustrated it. And it came in, it had a title, a cover page, and it said the authors were the young Mandev and ChatGPT. And it was, like, a full book; he bound it. And he was so excited about it, which was really cool. And then what we saw is that, like, teachers are actually doing this as well. They're bringing in, it's an important story, they want to make it meaningful for their students, so they're creating this storybook that they bring in. And so this idea of illustrating stories to make them more fun and interesting, or even having students take a very kind of common story that we all heard and then actually have it create that.


    Amanda Bickerstaff
    Have the students themselves, with your support if they're younger, or if they are older and they can get permission, create the illustrations themselves. What would be so cool is give every kid the same story and have them come up with their own illustrations to show the opportunities, but also the creativity component of it. Like, in my mind, I want it to be dark, but maybe in Mandy's eye, it's going to be puns all the way down, and I'm going to play it very straight, and Mandy's going to have karate kids everywhere. But that idea that, just between the two of us, we have such different styles is equally important to show. So I think that's a great way. Visual definitions of vocabulary words are really great, especially for students that are non-native English speakers.


    Amanda Bickerstaff
    Or if you're a language teacher, or it's a hard vocabulary word, maybe even one that you really see students struggle with, you can actually have that in there as well. You can have it illustrate classroom rules and routines. That's kind of fun. We've seen some cool things where you actually have it help you write. Instead of having, like, everyone's going to write a story about x, the students actually create their own prompts. They use their own prompts to create an image and then write about it. Or I have an image of the week that becomes the paragraph of the week that is connected. It could be, like, cats in space playing Parcheesi, you know, something like that. The kids have to create a dynamic story about how they got there or about setting or something else. And then for historical figures, be careful around that.


    Amanda Bickerstaff
    Not all will be usable, but it can be kind of fun to do something a little bit more playful with kids, like your Shakespeare bot instead of Shakespeare, for example. It can help with scientific processes; again, I think it's more for misconceptions. You can show one that's a little bit off that you've created, and then they have to pick out what's wrong. And then also, of course, the easiest low hanging fruit here, especially if you're using Canva Magic Media, is slide content. Mandy actually created here a manga illustration that demonstrates the concept of gravity. How fun is that, versus your more traditional, like, an apple falling? And so I think that these are just, like, the opportunities that we can start seeing.


    Amanda Bickerstaff
    That's really valuable, because what we see now is, like, this opportunity to start not just having better content or more interesting content, but also actually having students kind of model this, where, in the case of student access right now, ChatGPT 3.5 and 4 can be permissioned for 13-to-17-year-olds, so they would have access to the image generators. But they have access already in Meta AI; TikTok has avatars now. So even if they're not using it with you in an instructional manner, they're probably creating it in Snapchat, TikTok, and Meta. So just know that the way that we're thinking about using it is probably quite different than the way that students are thinking about using it.


    Amanda Bickerstaff
    And so just think about that as well, about that AI literacy component of making sure that you kind of have students that know that these images could be fake, they could be created, or they're going to be something that they need to be transparent about using. So we're going to keep rolling. I think our ultimate goal is to get to a place where you start trying these out. And so whether that's through language learning, which, like Kathy just pointed out, can be really great for actually labeling the image itself, or an actual visual writing prompt. And of course I'm in a hotel and someone is knocking on my door. Hold on one second. Yeah, I'm still here. Hi everyone. I'm in San Diego. But what we have here is, like, these are the opportunities that we want you to try.


    Amanda Bickerstaff
    So I think that at this stage, if we go to the next slide, we would just love you to think about these opportunities. Illustrations, book covers. I love, Mandy, you did a beautiful job on the book cover that is creepy and very Big Brother-y. So just get really creative, and that's always our challenge to everybody here: we ultimately want you to go off and use these tools, actually have them be available to your practice, and then start to get comfortable with them. So I think we'll wrap up here because we're a couple minutes over, but what we'd love you to do is just keep reaching out to us. There's a couple questions. I think this one, we're getting a lot of questions about the slides themselves.


    Amanda Bickerstaff
    So what we're going to suggest is, I think we'll pull together a one-pager from these kinds of key components that we can share. I love it; I'm sure Mandy's very excited that I just gave her more work. We don't tend to share our slides, but we can share a one-pager with some use cases, best practices, and the information about the actual tools. It does mean that the recording will probably be shared a little bit later than normal, but we really are happy to share that if you want to have it for your own practice or for your colleagues. And if you're watching it later, you should have access to this as well on our website. And we just say, thank you so much for hanging out with us. And, like, absolutely be willing to hang out.


    Amanda Bickerstaff
    We have our free course, I'm online, Mandy's online, and we have our prompt library. But we just hope that this is inspiration to get you started. Also, again, there are some really great people that are already here. Reach out, connect to each other, and we just hope that this is helpful to your practice. I mean, Mandy, do you want to give any final words?


    Mandy DePriest
    No, it's kind of like you've been hitting on: the best way to learn these tools is to play with them. There is so much out there; we've really barely scratched the surface today in our short time. So choose, like, one or two tools and really dive in and talk to other people who are using them. And that's, I think, the best way to really sink your teeth into this type of content.


    Amanda Bickerstaff
    Absolutely. Thank you guys so much. Good morning, good night, go to bed. If you're on summer vacation, enjoy the beach if you can get to it. But we just appreciate you all being a part of our community and hope you have a wonderful day. Thanks, everybody.