Image and Video-Rich Digital Signage with Cloudinary and Intuiface

In this episode of Cloudinary DevJams, we host Seb Meunier – Customer Success Manager at Intuiface! He will share how their interactive signage platform uses Cloudinary to deliver images and videos across various projects, thanks to a .NET plugin he developed.

Further demonstrating how the platforms interlink, Seb will walk through how their system can use many other well-known platforms. This includes Airtable and OpenAI’s DALL-E for AI image generation, in conjunction with Cloudinary. Seb plans to show these integrations through various operations, such as dictating prompts, uploading images to Cloudinary, and displaying the images on large screens.

This episode shows how you can use Cloudinary as a single source of truth for all your digital projects, including signage for shared spaces such as museums and galleries. So, check it out and see how it is possible, thanks to the great work Seb has done for Intuiface!

Sam Brace: [00:00:00] Welcome to DevJams. This is where we talk with developers who are doing interesting, inspiring, innovative things when it comes to images and videos inside of their projects, and many of the times using Cloudinary for that image and video delivery and overall management. My name is Sam Brace, and I’m the Senior Director of Customer Education and Community at Cloudinary.

And in this episode, we’re gonna be talking with Seb who works for Intuiface. And Intuiface is doing amazing things when it comes to being able to have lots of ways to have displays and bring those into places like museums and galleries and just places that you’re walking through to be able to show off lots of different functionality that is gonna help you to better understand the various spaces that you happen to be in.

And interestingly enough, they happen to be utilizing Cloudinary in some of those [00:01:00] projects. So we’re gonna be walking through that today with Seb. Joining me for this episode, and also many other episodes of this overall DevJams program, is Jen Brissman. She is a technical curriculum engineer here at Cloudinary, and we are always so happy to have her expertise and intelligence inside of these efforts.

So Jen, welcome to the program.

Jen Brissman: Hey Sam, happy to be here.

Sam Brace: Happy to have you here. So tell me, why are you particularly excited to talk with Seb and also the overall efforts around Intuiface today?

Jen Brissman: I’m excited to talk to Seb personally because he’s been working with Cloudinary for I believe, eight years, maybe more at this point.

And I think that’s incredible to have that sort of tenure and to be able to go back and see how it all started back in the day. But I’m also excited because what they’re doing at Intuiface is so cool. Just connecting audiences to physical spaces and I think we need a lot more of that, in these times.

And I’m really excited to hear [00:02:00] more about what he built for these conferences and what they built in Dubai. So really excited to have him on.

Sam Brace: Absolutely, and I think that’s kind of a neat thing: Intuiface seems to be doing a lot, and it is one of those areas where when you see things that you know you’ve worked on, it’s always exciting.

And I think Intuiface seems to be in a lot of places from what I gather. So I think that’ll be something, as you’re pointing out, being able to bring it into spaces, the audiences, and all those different things. It’ll be interesting to see what we uncover here with Seb today. Before we bring our guest on, I do wanna make sure that it’s clear that there have been many episodes that we have done with our overall DevJams efforts, and you can easily find those at cloudinary.com/podcasts. As you can see on the screen here, we have plenty of content where we’re talking with other developers that have done projects with Cloudinary, [00:03:00] or frankly, just showing off images and videos that they’re working with in really cool ways, and being able to help you understand how you can do many of these same steps in your next projects.

So make sure to check that out at cloudinary.com/podcasts. And similarly, Seb and other community members are inside of our Cloudinary community, so make sure that you are actively going inside of here to continue some of the conversations that we start in this episode. And that’s gonna be at community.cloudinary.com.

Now, one thing that we’re gonna point out in this overall episode as well is that the way that we found Seb, the great work that they’re doing, is inside of this blog post. And you can easily find this at community.intuiface.com, and they have lots of details about this event that they did in Dubai, the COP28 event for their client, Microsoft, a cool client, and there’s lots of different things that they’re able to do for this.

But inside of here, we started to see that Cloudinary is being actively used inside of this overall [00:04:00] post. And this is something that we are very excited to learn more about. So, a nice segue to be able to bring on what we’re gonna have as an amazing guest with Seb. So Seb, welcome to the program.

Seb Meunier: Hello. Hi Sam. Hi Jen. Thanks for having me today. It’s a pleasure, after eight years of working with Cloudinary, to finally meet someone at Cloudinary.

Sam Brace: I completely agree. So Seb, tell us a little bit about yourself. Obviously we’ve told people your name, Seb, and we’ve told people that you work at Intuiface, but there’s a lot more detail there. So, who are you?

Seb Meunier: Sure. So I’m Seb Meunier, Sebastien Meunier. I’ve been working with Intuiface, formerly known as IntuiLab, for 17 years. This is actually my first job, the first company I’ve worked for after getting my engineering degree. I’m a computer software engineer, specializing in vision, image, multimedia, so back in the days. [00:05:00] And with Intuiface, we’ll talk more about the company and the platform, I’m sure. But basically I went from being a developer in the product team to helping customers manage projects, and now being the customer success manager, mainly for North and South America. Do not trust the accent.

I am based in the US.

Sam Brace: This is great, and gimme a little bit of context here, because I feel like me and Jen have mentioned a few different things about what Intuiface is and what Intuiface does, but obviously we don’t work there, and you probably know it way better than us, especially since you’ve been there for quite a while.

So what is Intuiface?

Seb Meunier: Sure. So first of all, I apologize: this is a DevJam for devs, and Intuiface is a no-code platform. So it is a no-code platform for creating interactive digital signage, mainly. So basically any kind of signage where you want the users to interact with it, whether it’s a touch screen, a sensor, an RFID, or scanning a QR code and controlling [00:06:00] something with your mobile phone.

So any physical places where you want to engage with the visitors. That being said, with this platform, you can create any kind of apps, for small tablets to large touch screens, to tables, to video walls. We don’t really care about the form factor. We work on seven different operating systems and you can find us in a lot of places, and by us I mean the projects being built by our customers.

It can be museums, it can be point of sales, retail, real estate, hospitality, anywhere where you want a screen to not be just a screen playing a video, but be more than that. That’s the gist of it.

Jen Brissman: So how did you get into this line of work? You said this is your first job, but were you always interested in interacting physically with a machine or a screen, or was this something that you learned about for the first time with Intuiface?

Seb Meunier: This is because of Tom Cruise, Minority [00:07:00] Report. 1997, ’98, I think. And I don’t know all of you who remember that movie, but Tom Cruise was basically controlling images with his fingers, having some kind of funky gloves on it with gestures, 3-D gestures. And this was my project in 2005, working with $15 webcams.

Sport gloves with LEDs, 20 cents each, and doing that before iPhone one. Before the Wii. Before Kinect. So 2005, 2006, those were the projects I was working on, and I’d always wanted to become a video game developer, something like that. The industry brought me to meet someone from IntuiLab in a user experience, user interaction course. I did my internship at IntuiLab before it became Intuiface, and I was hired, and that was the beginning of the journey. [00:08:00]

Jen Brissman: Awesome. Yeah, I guess the rest is history. I remember the Nintendo Wii and just thinking, wow, we’re officially in the future, we can interact with the screen with our gestures and holding the remote.

And now we’ve just come so far and I’m sure there’s so much to come in the future in this space as well.

Seb Meunier: The Apple Vision Pro was just released and now we get ads everywhere about this. So spatial design, I guess, is the next thing. We were just trying to do that 15 years ago.

Jen Brissman: Yeah. Yeah.

Sam Brace: So let’s talk... No, go ahead. Go ahead.

Jen Brissman: Yeah, I was just gonna say, so the reason we have you here is because you built something really cool with Intuiface, and we wanna get into that, talk about that a little bit. So could we talk about, at a high level, what you built and then do a bit of a demo?

Seb Meunier: Sure. And I guess this all comes from the use case, and why we searched for a tool or a platform like Cloudinary. Again, our platform enables people to create interactive applications for public places. [00:09:00] One of the use cases from one of our customers eight years ago was: we want to create a cool photo booth where people can take a photo with a webcam and then upload it to social media.

Facebook, Twitter, for example. And so, there were a couple of issues. The first being: this is a public facing device. I’m not gonna ask somebody to enter his Facebook login and password on the kiosk itself. That’s not gonna happen. So we needed to find a way to, from the kiosk, use the webcam, take the snapshot, and then upload that image somewhere, so that the user could retrieve that image on his personal phone, his personal device where he’s logged in, where he has his credentials, that’s his private domain, and then he can do the post on his phone.

So, okay, where can we upload an image to? Google… found Cloudinary. Eight years ago. And we probably found a couple of options. It’s [00:10:00] rough, it’s tough to remember what happened eight years ago, but I would say because I’m the one who wrote the integration, I do remember it being super easy, super straightforward, and the documentation being just one of the best.

And the other thing I can say is what I wrote eight years ago still works today and is being used today.

Sam Brace: So it’s stable. That’s pretty cool. Yeah. That’s really cool. And so and that’s what’s neat is like because you had this project, and of course now that you’ve developed this integration where it’s now easy for people to take images that have been uploaded from Cloudinary and get those into Intuiface, as you said, no code.

So it’s easy for them to be able to do drag and drop and move them around and place them in different places of display. They now have their images stored in a single space, but then they can easily manage those and deliver those as needed inside of their Intuiface projects. So that’s…

Seb Meunier: Yes. So, eight years ago,

basically what we were doing was to upload that image from the Intuiface experience, the [00:11:00] snapshot being taken by the webcam, and from Cloudinary, getting back a public URL. Eight years ago, we were sending that to a Facebook kind of thing to make the post. Another use case we’ve had is we’ve been also working a lot with Airtable, as a kind of third party CMS.

And a funny thing about Airtable, and that was true eight years ago, and it is still true today, in their API, you cannot upload a photo. You have to give a public URL to Airtable so that a photo can be added in the Airtable base. So again, we upload to Cloudinary, get the URL, give it to Airtable to store these images.
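That Airtable limitation shapes the whole chain: Airtable’s API only accepts attachments by public URL, so the image goes to Cloudinary first and the returned URL is handed to Airtable. A minimal Python sketch of that hand-off, using only the standard library; the `Photo` field name and the base/table placeholders in the comments are hypothetical, not Intuiface’s actual schema:

```python
import json

def make_airtable_record(image_url, photo_field="Photo"):
    # Airtable's API cannot take a file upload directly; an attachment
    # field is populated from a publicly reachable URL, which is why the
    # image is pushed to Cloudinary first.
    return {"fields": {photo_field: [{"url": image_url}]}}

def airtable_request_body(image_url):
    # Serialized body for POST https://api.airtable.com/v0/<base>/<table>,
    # sent with an "Authorization: Bearer <token>" header.
    return json.dumps(make_airtable_record(image_url))
```

Airtable downloads the file from that URL itself and stores its own copy, so the Cloudinary link only needs to stay reachable long enough for the record to be created.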

And that’s a use case we’ve seen more recently, earlier in 2023, with the appearance of ChatGPT and DALL-E. And we’ll talk about generative AI a little bit more today, I believe. A customer, for a show event, wanted to have the visitors generate a lot of fun [00:12:00] images through DALL-E and ChatGPT, and then display an art gallery of all these images.

So we need to store the images and to transfer them from one device to another. Cloudinary, Airtable, same chain, same process, and zero code developed, because we had already written this plugin back in the day.

Sam Brace: Incredible. So I think we’ve talked about this plugin a little bit, and obviously eight years ago is eight years ago.

But let’s take a look at it. Let’s see exactly how you were able to get these two systems to connect with each other.

Seb Meunier: Sure.

Sam Brace: So on my screen, I should be showing you a help center document that has been developed for Intuiface, basically breaking down the steps that someone needs to do to be able to upload images to Cloudinary from the Intuiface platform. So yeah, let’s, dive into this.

Seb Meunier: Yeah. So the first thing is on the Intuiface website, we obviously have a support section with more than 500 articles. [00:13:00] If you search for Cloudinary, you will find this one single article. If you go through that, it’ll also have the link to GitHub, and I will jump to that in a minute. All our plugin source code is public, so anytime we develop a plugin, what we call an Interface Asset, we try to make the code public because anybody could build such a plugin.

We did it for Cloudinary because we had the need to, but there are a lot of our Intuiface users that build their own plugins. So now I’m talking to the devs here today. We are a no-code platform, but you can code. And you can create all these plugins, whether it’s in .NET for a player for Windows, or in TypeScript for the player for the other platforms that we have.

So, scrolling down in the page, we also have a link to this Facebook example I was talking about, and that’s the one I will show in Intuiface Composer, our editing tool.

Sam Brace: Okay.

Seb Meunier: I guess, Sam, you’re wanting me to have a look at the code a little bit?

Sam Brace: Yeah, let’s dive into it just so that people can [00:14:00] understand it.

One thing I think you’ll point out is that the code is actually very clean, simple, understandable. So it’s not like you have to dive into too many layers of, “oh, it’s gonna be hard.” No, there’s a lot of simplicity here, and this is wonderful, how you guys did this.

Seb Meunier: Yeah. it is, or at least we, tried to keep it as simple as possible. And, the first thing I would say again is if you want to use Cloudinary within Intuiface, you don’t need to go in that page. This is for those who want to get the code, have a look at it, maybe modify it if they want to. Add more features, maybe, from Cloudinary? But that’s, again, this is more for reference than for usage.

So that code here basically explains how it works. We use the Cloudinary SDK in .NET. And basically all you need to use Cloudinary in Intuiface are these three pieces of information: your Cloud Name, your API Key, and your API Secret. And sorry for the old [00:15:00] screenshot, that’s what we have in GitHub.

Sam Brace: Eight years ago, it’s fine.

Jen Brissman: It’s vintage. Vintage.

Seb Meunier: That was eight years ago. So you need these three values and then in composer, our authoring tool, you enter these three values and you’re good to go. So you can just get the build from this GitHub repository. You don’t have to rebuild it again, today, to be able to use it.

So let me jump into the code so I can show you a little bit how it works, and you will really see it is simple. First thing is we are in C#, .NET, in Visual Studio. And to be able to build this integration, we relied on the Cloudinary .NET library. That’s what made it easy eight years ago to build this integration.

We do have a couple of properties, and I will explain, in Intuiface lingo, what these properties are and how they are used in the platform. But you will find the Cloud Name, the API [00:16:00] Key and the API Secret I just mentioned. We have a constructor here, which doesn’t do much besides trying to update these credentials.

And we need the three of them to be able to do something. When we have the three of them, then we create this Cloudinary object. That’s what comes from the .NET library from Cloudinary. So far, one useful line of code. The second thing we have is one action, one method, UploadImage, which takes a file path as a parameter.

Again, we check if the credentials are okay. If they’re not: please verify your credentials. If they are, we check what the path looks like. We do have a little thing that’s really Intuiface-specific, to remove this weird path format. And then we create these upload parameters, if I remember well. This is also an object coming from your library.

There’s one parameter, the file, which is the file path, [00:17:00] and we start an asynchronous task to make sure that we do not block the UI thread, the graphics thread. In this asynchronous method, the only thing we do is call the Upload action from the Cloudinary object. So that’s the second useful line in this code.

We create the object from Cloudinary’s library, we call the Upload action. That’s it. The result is what we care about. That is what will contain the URL as the result of that upload. And that’s the one we’ll use to send to Facebook to send to Airtable. So that’s really this URL from the result that matters to us.

That’s it. That’s all the code we have.
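The plugin itself leans on the Cloudinary .NET SDK, so those two “useful lines” hide the wire format. For readers outside .NET, roughly the same flow can be sketched against Cloudinary’s plain upload REST endpoint in Python, standard library only. The path cleanup mirrors the “weird path format” step Seb mentions; the credential values are placeholders, and in practice the official SDKs do this signing for you:

```python
import hashlib
import time

def normalize_path(path):
    # Local files can arrive as file:/// URIs; strip the scheme so a
    # plain filesystem path remains (mirrors the path cleanup step in
    # the C# plugin).
    return path[len("file:///"):] if path.startswith("file:///") else path

def sign_upload(params, api_secret):
    # Cloudinary request signing: sort the parameters, join them as
    # key=value pairs with '&', append the API secret, SHA-1 the result.
    to_sign = "&".join(f"{k}={params[k]}" for k in sorted(params))
    return hashlib.sha1((to_sign + api_secret).encode("utf-8")).hexdigest()

def build_upload_fields(api_key, api_secret, timestamp=None):
    # Everything except the file bytes for a POST to
    # https://api.cloudinary.com/v1_1/<cloud_name>/image/upload.
    params = {"timestamp": str(timestamp or int(time.time()))}
    signature = sign_upload(params, api_secret)
    return {**params, "signature": signature, "api_key": api_key}
```

POSTing these fields together with the file to the upload endpoint returns JSON whose `secure_url` is the public link the plugin hands back to the Intuiface experience.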

Sam Brace: And this is clean, because what you’re ultimately doing is bringing through original images with no transformations, no optimizations from Cloudinary. But that’s fine for what it is, because of what you’re doing this for. You’re not putting this stuff on the web, you’re putting this onto highly [00:18:00] interactive signage, so you’re trying to make sure that it is at its highest resolution.

It is in its original state. So it makes sense why, when you’re doing this, you’re just basically trying to get the biggest file version that you have in Cloudinary, and pull it in so that you can work with it in Intuiface, from what I can see.

Seb Meunier: That’s correct. And that’s why we did publish the source code, in case somebody else wanted to use more than just 1% of what Cloudinary offers in terms of features, like resizing, image processing, cropping, round edges.

I don’t even remember all the features you have. And it’s almost shameful to say that no, we only use you as cloud storage.

That’s fine, It shows that’s what we did.

Sam Brace: Yeah, exactly. It makes tons of sense why, based on how your customers would end up using it. So this is good. This is really good.

Seb Meunier: So to explain how this is used, what I would like to do now is to [00:19:00] show how this Upload Image action and these three properties are actually used in Intuiface without writing any code. So I’m gonna switch to Intuiface Composer, and by the way, this post to Facebook, that’s the sample you can download from our website.

So that’s exactly the same. Maybe I just made a few texts a bit bigger for today. That’s it. And so among the Interface Assets here, if you search for Cloudinary, you will find this Cloudinary Image Uploader. That’s the plugin. That’s the build of this GitHub repository as a library, what we call an Interface Asset.

Once you add it to your Intuiface project, you will find three properties on the side: Cloud Name, API Key, API Secret. These are cropped so you cannot use them. Those are the properties you need to fill in to be able to use this uploader. The next thing is, [00:20:00] we will have some buttons at some point in this demo, and a button can call an action.

So all the dynamic aspect of an Intuiface project, what we call an Intuiface experience, is based on properties, triggers, and actions. You don’t write code but you can program things to happen. So when I press and release this upload button, I have my button “Is Released” trigger. And on the action side I have my Cloudinary Intuiface Asset, and I have this one single upload image action, which we defined in the C# code.

It has one single parameter, the file path, the image URI. So that’s what is seen by the Intuiface users, without having to know what happens in the C# code behind the scenes.

Sam Brace: Incredible.

Seb Meunier: I might just play that demo, just to see live how it works. Yeah. So [00:21:00] I’m running on Player for Windows. I’m gonna hit play here. I did not put the webcam on, because I’m already using the webcam, so that would become fixed. So just a basic image here. I’m using my touchscreen. I will just show my amazing drawing skills here. All right. Little smiley. Taking a screenshot. So let’s say this is the file on my local drive, which I want to upload to Cloudinary to post on my Facebook account.

You can see here the local file path. That’s what I retrieved from Intuiface. When I click this upload button, that’s the path I’m giving to Cloudinary. Cloudinary replies pretty quickly with this part here: res.cloudinary.com, sebmeunier, that’s my account, that’s my image. And what we did in that sample, we added that to a specific Facebook URL.

So now if I grab my phone, as a user in front of a kiosk, remember this is public facing.

Sam Brace: Right.

Seb Meunier: I [00:22:00] can scan this code, and on my phone, trying to show that here…

Sam Brace: There it is.

Seb Meunier: This is my Facebook account and I’m ready to publish this image on my private Facebook account. I didn’t have to give any credentials to the kiosk.

I just retrieved the image on the phone thanks to Cloudinary being the proxy to host that image for us.
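The “specific Facebook URL” the sample appends the image to is what turns Cloudinary into the bridge between kiosk and phone: the QR code just points the visitor’s logged-in device at a share dialog for the hosted image. One plausible shape for that link, assuming Facebook’s share-dialog endpoint; the exact URL the Intuiface sample uses may differ:

```python
from urllib.parse import urlencode

def share_link(image_url):
    # URL the kiosk would encode into the QR code: Facebook's share
    # dialog pointed at the Cloudinary-hosted image, so the post happens
    # on the visitor's own logged-in phone, never on the kiosk.
    return "https://www.facebook.com/sharer/sharer.php?" + urlencode({"u": image_url})
```

Because the kiosk only ever hands out a public URL, no social credentials touch the shared device.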

Sam Brace: This is cool, and I can see why someone from a signage standpoint would really want this. Because let’s say that you go to see some beautiful landmark and you’re trying to get a picture of your family all together.

You’re like, great, how do I do that and share it quickly to social media? Sharing images on social media is probably a common thing people want to do. And then because you’ve been able to link Intuiface and Cloudinary together, you’re able to point this to different places. That’s really good. And it definitely answers something that a common signage user would wanna do. So [00:23:00] this is awesome.

Seb Meunier: Yeah. So this was really the use case eight years ago. And again, nowadays this module is available and can be used in any way when you need to upload an image you have locally on that public facing device, kiosk, table, wall.

And do something with it online.

Jen Brissman: Do you have any moderation concerns? Or, I suppose it’s in public, nobody’s doing anything inappropriate. But say someone were to hold up some sort of symbol that you deemed inappropriate, has that ever been something that you’ve thought of, or have had to combat?

Seb Meunier: Yes. Yes it is. And actually, if we talk about the second use case, which is more about uploading an image to a kind of collection, which is then displayed in a public facing gallery, then there is this moderation proxy or moderation step in between that can be done, depending on where we store that image.[00:24:00]

So in another example: once we have uploaded the image to Cloudinary, then we upload it to Airtable, to a base, in which we can have something as simple as this. We add a second column: is this validated, yes or no? Then when we upload the image to Airtable, we send an email to the moderators.

They know if they have something new here, and they can just check it in Airtable. And so when we display that list of images, we only display the ones which have been validated. So that’s a pretty common case. And that’s actually one which is being used this week in a new project we are working on with the lobby of a residence.
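The moderation step Seb outlines is a plain filter over the Airtable records: only rows a human moderator has checked off get shown in the gallery. A small Python sketch under those assumptions; the `Validated` and `Photo` field names are hypothetical stand-ins for whatever the real base uses:

```python
def validated_image_urls(records, flag_field="Validated", photo_field="Photo"):
    # Given records from Airtable's list-records response, keep only the
    # ones a moderator has checked off, and return their image URLs.
    # Airtable omits an unchecked checkbox from "fields" entirely, so a
    # plain .get() default covers both the False and the missing case.
    urls = []
    for rec in records:
        fields = rec.get("fields", {})
        if fields.get(flag_field):
            urls.extend(att["url"] for att in fields.get(photo_field, []))
    return urls
```

The public display then only ever iterates over this filtered list, so an unreviewed upload never reaches the screen.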

Jen Brissman: Yeah, I can imagine that if it’s in the lobby, you would wanna make sure that all the images are appropriate. And now even Cloudinary offers AI capabilities to moderate images and videos, any assets that come in. And so I just wondered how you were doing that. So you’re doing it through Airtable, and what do you send it off to? Is this Airtable’s moderation, and how long does that all [00:25:00] take?

Seb Meunier: No. That would be more on the end customer.

Jen Brissman: Oh, I see.

Seb Meunier: So human moderation, for now. And again, that’s not how we do it, generally speaking. That’s one example of how it can be done.

Jen Brissman: How you could do it. Cool.

Seb Meunier: Yes. In the end, what is also important to see is that if there is a REST API to integrate with the system, you don’t even need to write the code I was showing for Cloudinary. We do have a tool called the API Explorer, which enables you to integrate these web services without having to write any code. We’ve been talking about AI a little bit. I do have an example of a scenario that I could show you, if we have five minutes, five to ten minutes.

Sam Brace: We have plenty of time. You show everything. Absolutely.

Seb Meunier: Alright, so we’re gonna go into a blank project, so you know I’ll really start this from scratch.

Sam Brace: Okay.

Seb Meunier: One of the scenarios that we’ve had recently was: we want to use AI to [00:26:00] generate some cool images, and to display them in a public gallery.

So to do that, we need a couple of components. The first one will be, we need a prompt to send to the AI, we need something like DALL-E to generate an image. We need to upload the image somewhere — that’s where Cloudinary comes in. And we need something to list these images. That’s where we could use Cloudinary.

We use Airtable, for these examples.

Sam Brace: Okay.

Seb Meunier: So all these modules here are basically all these plugins of Composer. So if I search for OpenAI, I do have my ChatGPT. I should not need that here today. I’m gonna use Whisper. That’s the speech-to-text component. So we’re gonna use this one. I will definitely need Cloudinary, and for DALL-E…

DALL-E is one which has a very simple REST-based API. So that’s where we don’t have it off the shelf, at least right now. We will actually [00:27:00] have it next week, to make things even simpler for our users. But we do have this tool, the API Explorer. So I’m gonna show you how this one works. If I go to the OpenAI reference documentation, this is how you create an image using DALL-E 3 at the moment. So it’s a simple POST that has an API key in the header, and a couple of properties in the body of that POST request. Okay, so let’s say I’m not a developer, I don’t understand what that means. The only thing I need to do is to get the curl command and copy that. That’s all I need. I’m gonna go into the API Explorer and in here,

paste this curl. The only thing I will do is add my credential key in [00:28:00] here, or else this is not gonna like it, and hit enter. I believe there’s one parameter I need to modify, maybe, but what’s behind the scenes here is our API Explorer, which is an AI-based engine. There’s one parameter wrong here because the documentation is not really up to date, and this is analyzing that curl request, building the host, the endpoint, all the parameters, whether they’re header parameters or body parameters, and it is trying to see, okay, that’s what the API from DALL-E is replying. I don’t know if this is JSON or XML, I don’t need to know. This is geared towards non-developers.

Sam Brace: Right.

Seb Meunier: So this revised prompt, that’s what came back. “A cute baby sea otter,” that was my default prompt. I’m gonna keep that and say this is my DALL-E Create Image component.

And I will have this here in the scene.[00:29:00]
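The curl snippet the API Explorer ingests boils down to one POST against OpenAI’s image-generation endpoint. A rough Python sketch of building that request body and pulling the image URL out of the reply; the parameters follow OpenAI’s DALL-E 3 documentation at the time of the episode and may change:

```python
import json

def dalle_request(prompt, model="dall-e-3", size="1024x1024"):
    # Body for POST https://api.openai.com/v1/images/generations
    # (plus an "Authorization: Bearer <key>" header) - the same request
    # the curl snippet from OpenAI's docs makes and the API Explorer
    # rebuilds behind the scenes.
    return json.dumps({"model": model, "prompt": prompt, "n": 1, "size": size})

def image_url_from_response(body):
    # DALL-E replies with JSON; the generated image lives at data[0].url,
    # alongside the revised_prompt shown in the demo.
    return json.loads(body)["data"][0]["url"]
```

That returned URL is the local piece the experience then feeds into the Cloudinary Image Uploader.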

So let’s try it. For now, this is empty, because I need to send a request to actually get the result. We did say we want a prompt, so I’m gonna add a text input. So we could either type our prompt using the virtual keyboard, or, because we have speech to text, I’m actually just gonna use Whisper to dictate my prompt.

Sam Brace: Okay.

Seb Meunier: So I will add the button here. My button will do two things, and that’s how we use these plugins without writing the code, without having to do the API calls. On Whisper, when I press the button, I want to start the recording. And I’m adding a second one: when I release the button, I will stop the recording.

So push to talk, kind of scenario.

Sam Brace: Okay.

Seb Meunier: So Whisper, stop recording and transcribe. These are the [00:30:00] actions. From Whisper, when this is processed by the API, we get a trigger, the transcription has been received, I can use that value, and I will just feed my text input. In case they don’t like my accent, I can fix it.

I’ve done that before and I know that sometimes OpenAI doesn’t like French. So we have our first step here. The next step is to actually create the image.

So again, we’re gonna use a button to call the second step, DALL-E. We have a credential key, we have a model, we have the size. I don’t want a cute baby sea otter this time. That’s not what I want. I want to use as the prompt the content of my text input.

Sam Brace: Right.

Seb Meunier: [00:31:00] And I’m going fast here because I know you can replay that.

And we have a lot of webinars and videos explaining how this works. This is called a binding to do some kind of data flow, if you will.

Sam Brace: Okay.

Seb Meunier: So we get that image, that image is gonna be created, and it’ll be displayed here. We have the prompt, which is empty at the moment, and the image is gonna be somewhere here.

Our last step. I’m using buttons here, but we could obviously automate that and add some indicators, graphics. That’s why I’m the dev and not the graphic designer. Upload to Cloudinary, that’s the step which matters to us here. When we get that image locally on the device, the response from DALL-E, I want to call on the Cloudinary Image Uploader this Upload Image action, which we saw earlier. And the file path,

I’m actually gonna grab it from the image asset I have on my scene. [00:32:00]

That’s it.

Sam Brace: Okay.

Seb Meunier: The last thing for the video, for the demo, to explain exactly what’s going on: I will add a second text input here, in which I will use the Cloudinary trigger, the image has been uploaded, just to make sure it worked, and we are gonna fill in that text.

Set text with the result coming from our trigger, Image uploaded, that’s actually the URL that you guys are giving us back.

Sam Brace: Okay.

Seb Meunier: So then I can use this URL to post on Facebook, to post to Airtable, to whatever I want. Before I hit play: you see these fields are empty, so this is not gonna work. I need to go to my Cloudinary account and copy these values from my account.

That’s what you need to do with your Cloudinary account to make sure that it goes to your cloud and not to mine. So this is what I loved about [00:33:00] Cloudinary. It’s super simple. You have three copy buttons. And the last one is, no, that’s not the one.

Sam Brace: There you go. Yep. Yeah.

Seb Meunier: All right. And I need to do the same thing for Whisper. OpenAI key.

And I hope I didn’t forget anything. So it’s demo mode. Let’s see.
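The Whisper step that feeds the dictated prompt into the text input would, assuming OpenAI's standard speech-to-text endpoint, send a multipart request shaped roughly like this. Field names follow OpenAI's documented API; the function is illustrative:

```javascript
// Sketch of the multipart fields a Whisper dictation step would send to
// https://api.openai.com/v1/audio/transcriptions. "whisper-1" is OpenAI's
// documented model name; the file path is whatever the recording step
// produced on the device.
function buildWhisperFields(audioFilePath) {
  return {
    model: "whisper-1",
    file: audioFilePath,       // attached as a file part in the real request
    response_format: "text",   // plain text drops straight into the text input
  };
}
```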

Sam Brace: Alright, let’s see.

Seb Meunier: Imagine a joint logo between Intuiface and Cloudinary.

Sounds good.

Sam Brace: It’s pretty good. It knew the words. I like that.

Seb Meunier: Create image. That’s where you want to show something like a spinning wheel, because OpenAI is not the fastest one to reply with an image. But here is the process. Here is the logo.

Sam Brace: Incredible.

Seb Meunier: Okay, why not? And then I can just upload this to Cloudinary.

I get the link and if I open that in the browser, now we have a public image, [00:34:00] which is our new merge logo.

Sam Brace: Incredible.

Seb Meunier: There you go. I think I was within the five minutes, more or less.

Sam Brace: Yeah, absolutely. Absolutely. Walk me through how Airtable is used in this process. Or maybe I misunderstood.

Seb Meunier: Yes, that’s because I didn’t go through that part yet.

Sam Brace: Okay.

Seb Meunier: But it’s a good point. So let’s assume I want now to upload this image into this particular table.

Sam Brace: Okay.

Seb Meunier: Because I want this list to be displayed maybe on a second screen, maybe on another scene in our experience. So I need two things. I need, one, to be able to display this list of images. Two, to be able to post a new record here, with the link Cloudinary gave me back.

Sam Brace: Okay.

Seb Meunier: And that’s where Cloudinary and Airtable for us were good tools to work with, because of the quality of the documentation. So if I want to display this list of pictures, I can go into: [00:35:00] Help, API documentation. I have my list, my table here, and I want to list the records.

So if I had to write some code, I would need to make that request. That’s the kind of response I would get, and I would need to parse it and process it and pick the field I want and write some JavaScript to handle that. But we have the API Explorer.
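The hand-written JavaScript Seb is describing would look roughly like this. Airtable's list-records response is a `records` array of `{ id, fields }` objects; "Photo URL" here stands in for whatever the column is actually named in the base:

```javascript
// "Parse it, process it and pick the field I want," done by hand.
// Returns every value of one field across the records, skipping rows
// where the field is empty.
function pickField(listResponseJson, fieldName) {
  const body = JSON.parse(listResponseJson);
  return body.records
    .map((rec) => rec.fields[fieldName])
    .filter((v) => v !== undefined);
}

// A made-up sample response in Airtable's documented shape:
const sample = JSON.stringify({
  records: [
    { id: "rec1", fields: { "Photo URL": "https://res.cloudinary.com/demo/a.png" } },
    { id: "rec2", fields: { "Photo URL": "https://res.cloudinary.com/demo/b.png" } },
    { id: "rec3", fields: {} }, // a row without a photo yet
  ],
});
// pickField(sample, "Photo URL") returns the two URLs, skipping the empty row.
```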

Sam Brace: Ah.

Seb Meunier: So all I need to do here is actually to copy this curl request, go back into Composer and use again the API Explorer. So let’s bring it in. I’m gonna copy my request. Again, I do need my Airtable API key. I do have it on my second screen here. I send. If you remember that big piece of JSON that we were getting, this is what an Intuiface user will [00:36:00] see.

Sam Brace: Okay.

Seb Meunier: And sorry for the test pictures.

Sam Brace: Okay. Yeah.

Seb Meunier: I’m gonna just remove the max records because I want more than three records, and I can say, among this big JSON, I’m only interested in the main photo URL.

That’s all I care about.

Sam Brace: Okay.

Seb Meunier: So I can select that and say, here, these are my uploaded photos from Airtable. Good. Now I can display that list. That’s a previous joint logo I tried to make earlier.

Sam Brace: Mhmm.

Seb Meunier: And so that’s the list to which we want to add our new picture. I’m gonna put this a little bit smaller here.

There we go.

Lemme just fix this one. I went a bit fast here. Photo URL. We have our images.

Sam Brace: Wonderful.

Seb Meunier: So that’s to display the [00:37:00] list of pictures we have uploaded to Airtable. The step we are missing is to send the Cloudinary URL to Airtable. So going back to our documentation here, we don’t want to list the records. We want to create the records, and we want to create a single record.

That’s what the curl looks like. Again, if you had to do that in JavaScript, a good developer can do it in a few minutes, probably. We just copy and paste this in the API Explorer again and go with the same process. Do you want me to go through that, or does that explain the idea and the process?

Sam Brace: Yeah, absolutely. This is great.
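For reference, the create-record call Seb is pasting in reduces to a small POST body. Per Airtable's API documentation, a single-record create is a POST to the table's endpoint with a `{ fields: { ... } }` body; "Photo URL" is again a stand-in for the real column name:

```javascript
// Sketch of the body for creating one Airtable record carrying the URL
// Cloudinary handed back from the upload trigger.
function buildCreateRecordBody(cloudinaryUrl, fieldName = "Photo URL") {
  return JSON.stringify({ fields: { [fieldName]: cloudinaryUrl } });
}

// And the headers the request needs:
function buildAirtableHeaders(apiToken) {
  return {
    Authorization: `Bearer ${apiToken}`, // the personal access token
    "Content-Type": "application/json",
  };
}
```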

Seb Meunier: Okay. Okay, so let’s go again in the API Explorer. We’re gonna … I still need my credential key.

So the API token goes here. Now [00:38:00] this one here, I believe, yeah, this is the one that can get messed up a little bit, because there’s a Boolean and we only send strings. So there is one limitation with the API Explorer.

Sam Brace: Okay.

Seb Meunier: And I need to remove that, so let me just… that’s where sometimes you do need to know what the curl request is.

Sam Brace: Yeah.

Seb Meunier: A little bit. Let’s update it before using it in the API Explorer. No, let me go back here. That’s our request. Okay, we will just remove this one field here, so this one doesn’t bother us. Go back in here. Now it’s missing the credential key, which I will add here.
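The field being removed is the Boolean in Airtable's sample body (likely the `typecast` flag, though that's an assumption here). Since the API Explorer only sends strings, a dev doing this programmatically could clean the payload like so, as an illustrative sketch:

```javascript
// Drop any non-string scalar (booleans, numbers) from a request body
// before handing it to a tool that only sends strings. Nested objects
// like "fields" are kept as-is.
function stripNonStringValues(body) {
  const cleaned = {};
  for (const [key, value] of Object.entries(body)) {
    if (typeof value === "string" || (value && typeof value === "object")) {
      cleaned[key] = value; // keep strings and nested objects
    }
    // booleans and numbers are silently dropped
  }
  return cleaned;
}

// stripNonStringValues({ fields: { "Photo URL": "..." }, typecast: true })
// keeps "fields" and drops "typecast".
```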

Sam Brace: Okay.

Seb Meunier: And [00:39:00] what am I missing?

Photo for the URL should be good.

Maybe I did not give the right permission on my base for that token. That’s probably what happened, because usually I do read-only demos, and now with Airtable’s new tokens, they have strict permissions. So I guess that’s what’s going on. But you get the idea. You would use the same kind of method. You just pass the URL coming back from Cloudinary here as the parameter, as this field, and that’s it. Then it’ll add the image to the Airtable base, and then you can refresh what is being displayed on the wall, and you can keep adding media, with or without a validation step, depending on whether you want it.

With the images generated by [00:40:00] AI, there was no need for validation because OpenAI already generates…

Jen Brissman: Appropriate images.

Seb Meunier: Appropriate stuff… appropriate images, for selfies and photo booth, definitely we would put some moderation steps.

Jen Brissman: Wow.

Sam Brace: I can see where, when you were talking about the project, when you said we wanna show images for, let’s say, a neighborhood wall for an apartment building or something like that, it makes sense why you’d wanna see a collage or a gallery, like what you’re able to show with this.

And the fact that they then go to this repository, which is basically what Airtable is acting as, and then you’re able to keep displaying those dynamically through Intuiface. This is slick, but the thing I like about this is that you’ve taken all this work that you’ve done developing code and working with various APIs, such as Cloudinary’s, and made it super easy for your users. They don’t necessarily have to know a lot about any of the underlying elements here, other than maybe API [00:41:00] keys and API secrets. It’s all done through simple actions and buttons. So this is really cool because of that.

Seb Meunier: Yes, we’ve tried to keep everything, whether it’s an image asset or text input, something visual on the scene, or a plugin like these Interface Assets, relying on the same three concepts: properties, triggers, and actions. So even when you add a new plugin, we have this X-ray panel, which shows you the list of all these PTAs, these properties, triggers, and actions. So it’s super easy to see what’s available in read-write or in read-only, what actions I can call, and what result I’m getting back.

Because when you write a method in C# or in JavaScript, you can have a return value. When you call an action here, you don’t really have a return value. That’s why we need the triggers, which would be events in common coding [00:42:00] languages.
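The properties/triggers/actions split Seb describes maps naturally onto an event-driven class. The sketch below is purely illustrative, not Intuiface's real SDK: an action returns nothing, and the outcome arrives through a trigger, just like the "Image uploaded" trigger in the demo:

```javascript
// Illustrative PTA-style asset: properties you read, actions you call
// (no return value), and triggers (events) that carry results back.
class UploaderAsset {
  constructor() {
    this.lastUrl = null;   // a readable property
    this._listeners = {};  // trigger subscriptions
  }
  on(trigger, handler) {   // wire a trigger, e.g. "imageUploaded"
    if (!this._listeners[trigger]) this._listeners[trigger] = [];
    this._listeners[trigger].push(handler);
  }
  _raise(trigger, payload) {
    (this._listeners[trigger] || []).forEach((h) => h(payload));
  }
  // An action: no return value; the outcome arrives through a trigger.
  uploadImage(filePath) {
    const url = `https://res.cloudinary.com/demo/image/upload/${filePath}`; // stand-in URL
    this.lastUrl = url;
    this._raise("imageUploaded", url);
  }
}
```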

Jen Brissman: Yeah, I love looking at this Composer, because every programmer knows these concepts, like an on-click event listener and a button, and what happens behind the scenes.

But a regular user, or maybe someone who shows up to a kiosk or museum, or even somebody who’s building their own experience using the plugins and this Composer, they might know what a button does. We all know when you click on a button, something happens, but you don’t need to understand what happens behind the scenes, because it’s all laid out in Composer in this graphical user interface way that is really inclusive of every technical level.

But it’s cool if you do know what happens conceptually behind the scenes, because I think then you can dig in the way that we’ve watched you demo, remove certain parameters, and really customize it more.

Seb Meunier: Yes.

Jen Brissman: But it seems like, would you say, Seb, that your users of this Composer are mostly not technical, or is it really a mix?[00:43:00]

Seb Meunier: 99 percent are non-technical.

Jen Brissman: Wow.

Seb Meunier: I would say, I like to see Composer, and I’m obviously biased because I’ve been using the tool since we created it ourselves about 10 years ago. 2012 was the year we released version one of Composer. I like to see it as Excel or Photoshop. And I’m completely making up these numbers, but 80% of the users of Excel and Photoshop use less than 10% of the features. Probably.

Sam Brace: Yeah, I agree with that.

Seb Meunier: Same thing with Intuiface. We have a lot of users that will not even use Interface Assets. They will just go to a folder, grab a bunch of pictures, lay them out on the scene, hit play. They can build a couple of scenes with a couple of buttons to navigate between them, put that on a kiosk, and that’s in the museum, which is [00:44:00] way better than a looping video, because people can decide what content they want to consume. And that’s probably most of our customers. Most of our projects might be small to medium museums with a couple of screens, maybe a couple dozen screens.

Like the MoPOP Museum in Seattle, the Museum of Pop Culture. We have 50+ screens running Intuiface there, and I had no idea they were customers until I visited the museum myself. And I kinda recognized the navigation between the objects and thought, that looks familiar. I had no idea. And now I know the guy who did that. He doesn’t know how to write a line of code. There you go.

Jen Brissman: There you go. Yeah.

Sam Brace: I think the cool thing that you’ve done as well is the fact that you are pushing things to Cloudinary, so that Intuiface and Cloudinary are able to work together. Because once it’s in Cloudinary, if someone said, I wanna use that same image on my website or my mobile app, or something like that, it’s there and it already has a URL [00:45:00] for it. So it’s easy enough for them to then weave it into a content management system or deliver it to a developer that’s managing the web presence. It’s all right there in the library.

Seb Meunier: You’re right. It is. And honestly, right now we’ve been using Airtable as this proxy to list the media files, probably because I didn’t even look at the other APIs you have, because I’m guessing I can query my media library and see this merge logo, which we just uploaded together.

It’s here. So I can probably list it and get the list and use just Cloudinary without Airtable in here, most likely.
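The direct route Seb guesses at does exist: Cloudinary's Admin API can list the media library without Airtable in the middle. A hedged sketch of the request, following Cloudinary's documented endpoint and basic-auth scheme, with placeholder names:

```javascript
// Build a request to list image assets via Cloudinary's Admin API:
// GET https://api.cloudinary.com/v1_1/<cloud_name>/resources/image
// authenticated with basic auth over api_key:api_secret.
function buildListResourcesRequest(cloudName, apiKey, apiSecret, maxResults = 30) {
  const auth = Buffer.from(`${apiKey}:${apiSecret}`).toString("base64");
  return {
    url: `https://api.cloudinary.com/v1_1/${cloudName}/resources/image?max_results=${maxResults}`,
    headers: { Authorization: `Basic ${auth}` },
  };
}

// The response's "resources" array carries public_id and secure_url per
// asset, which is all a wall display would need.
```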

Sam Brace: Yes.

Seb Meunier: But definitely the image is here, and that’s where there are many more things we could probably do. Like all the advanced editing you offer, all these AI-based features that you have, which I don’t even know about, to be honest.

Sam Brace: Exactly. And I think it’s still cool that you showed the Airtable side of things, because Airtable is a big factor in a lot of the ways that developers are working. It shows a [00:46:00] realistic scenario where you’re using something you’re probably already highly invested in, like Airtable, but now you’re trying to find a way to work with the imagery, and you’re like, how do I make this all connect?

So it makes total sense why you showed what you showed. But yes, of course, there are ways, if you wanted to have one vendor, to just use Cloudinary for some of this, too. But that’s not for this episode. Your example is excellent. This is good. And I think that goes back to the point that your single source of truth for your images and videos is now cloud-based. So if you say, I wanna use it for my kiosk and my signage, but I also wanna use it for other purposes, you have it and it’s all right there. So this is wonderful.

Seb Meunier: Yes, and it’s really useful for us or for our customers when we have a multi-device setup, like…

Sam Brace: Yep.

Seb Meunier: An entrance tablet and a super large 4K video wall, super high resolution, on the back end, running during the show, during the event. Or when we have this public device versus private device, we [00:47:00] need to make this connection without being able to do anything on the phone, because on the phone they don’t have an Intuiface app running. So that’s another thing: with our latest player technology, the Intuiface Player, our software player, we can build Intuiface experiences using the same Composer tool and run them either in venue, on the player installed locally on the device, or on the web.

And that’s new. We released that last September. And that opens up a whole new world of possibilities, because now I can build an experience in Composer, publish it on the web, send that URL, put it in a QR code, and have somebody with an iPhone take a picture and upload that picture to Cloudinary.

Scenarios are just starting to explode with that possibility. And to loop the circle here, the [00:48:00] project you mentioned, the Dubai COP28 one,

Jen Brissman: Yeah.

Seb Meunier: Was not built by us. It was built by Tossolini Productions, one of our customers, creative experts based in Seattle. And they are the ones who’ve been pushing the edges of Intuiface for years. Probably one of our first customers. And they are experimenting with VR, AR, AI, all these two-letter acronyms, and trying to get the videos or photos from an iPhone, using Intuiface on the iPhone, and do things with that. Right now, Paolo, the guy whose picture you have on your screen, he’s building a project for Microsoft again, for trade shows. Like wayfinding on a trade show, because Microsoft booths are almost a whole hall in size.

So they need wayfinding to know which partner is where on the booth. And he’s building that on the phone using Intuiface as [00:49:00] well, and thinking about, okay, we can take a selfie and send that to the social media team of Microsoft, so we can show it on the wall. Brainstorming about these new scenarios where the mobile and the larger in-venue screens work together even more than before.

Jen Brissman: Wow. That’s exciting.

Sam Brace: Absolutely.

Jen Brissman: Yeah, it seems like there’s a lot being developed for the future, and this is really just the beginning. So thank you so much for coming onto DevJams, Seb, and sharing all that you shared with us. I hope that people really are inspired by a lot of the demos that you did. I know I definitely was.

Seb Meunier: Thank you, thank you again for having me today. Again, I know this is DevJams. I know Intuiface is a no-code platform, but I hope you can see that plugins can be developed, especially with this new next-gen player platform we have on the web, with TypeScript Interface Assets.

So if you have any questions about the platform, definitely reach out to me. I believe you’re probably [00:50:00] gonna show that. Reach out to us. We have a free version on the website, all the good things. We have a community where most of our users are sharing what they build with Intuiface.

So that’s the website. We do have a free trial for a month, and you can also share my direct email address for sure. No problem.

Sam Brace: Absolutely. And I think that’s what’s wonderful about this: you’re showing that this no-code platform you have is very flexible to extend. You can basically say, I need to hook up to this or that.

And the fact that you’re able to show how you work with pretty API-centric things, Cloudinary, Airtable, DALL-E, means that if you want something to be usable by your no-code users, you can just go ahead and get it to work with your system. It’s pretty incredible how easy you make it for them to create digital content. So Seb, great work by you.

Seb Meunier: It’s taken a few years, but…

Jen Brissman: Yeah, no, I totally agree too, because the way [00:51:00] I think of it, Seb, as you were saying, “oh, this is for developers,” but really developers these days are having to do what you’ve done. I think of you as a connector. You know, imagine a puzzle, but instead of only one way that the puzzle goes together, these pieces can connect in all different ways, and it’s up to the user or the developer or whoever to put these pieces together the ways that they want. And they can use only 1% of the capabilities that Intuiface offers, or that Cloudinary offers, or Excel, or everything that we were talking about. I think even as humans, we use only a small percent of our brains. But really, we’re putting together things in different, unique ways. And you are the enabler, because you as the developer, and the people listening or watching DevJams as developers, are thinking, how do I connect these softwares together so that people can use them more easily?

And that includes non-technical people. So there’s so much for very technical people to listen to in this type of conversation and glean from it, for sure. [00:52:00]

Seb Meunier: That’s actually the word I was gonna use. Enabler.

Jen Brissman: Yes.

Seb Meunier: That’s one we use a lot because we want to enable our users to use what they don’t know how to use.

Jen Brissman: Yeah.

Seb Meunier: Without having to acquire these skills that they might be missing, or they don’t have the resources or the time to acquire. Another thing which might be interesting, and it’s happening right now: some of these plugins, these Interface Assets, like the one with Whisper that I used in the demo. I’m ashamed, but I didn’t write the code for that. GPT did. Because we have an Intuiface coding assistant as our own GPT, which is released now publicly on the store, on the, I don’t know how they call it, on GPT. But if you search Intuiface in ChatGPT Plus, you can find our coding assistant. It’s still a tech preview, because it’s still AI, it’s not perfect, but I was able to have 80% of that code written by GPT and just fix the [00:53:00] last little things. So we also save time now, even for developing the plugins, thanks to AI, and that’s definitely gonna be a big change in ’24.

Sam Brace: Incredible. Incredible. And I have a hunch, Seb, this is probably not the last episode of ours that you’ll be on.

I have a hunch that there are gonna be a lot of things we can show about how developers can develop plugins and extensions, and use your platform, our platform, and others to make it easy for all users to do great things with digital content. So, Seb, keep it up. This is great stuff.

Seb Meunier: Yeah. I agree. And, I think we’ve used 10% of Intuiface to use less than 1% of Cloudinary, so there’s room for improvement.

Sam Brace: Always. Always. Thank you again for coming. I appreciate it. Absolutely. So Jen, what’s your big takeaway here? What’s the thing that stood out to you most from what Seb was able to show us and talk to us about today?

Jen Brissman: A big takeaway for me [00:54:00] is that Seb and what he’s built within Intuiface really uses Cloudinary in a small way. However, he’s been able to scale that and continue with what they made eight years ago now.

And I think there’s sort of … Like when I first was using Cloudinary, I was really just using Cloudinary to extract the URL as well. And when I learned all that Cloudinary was capable of, I was almost a little embarrassed, like, “oh man, maybe I missed the point altogether,” but that’s not true. You really just need to use a given software in the ways that work for you. You just need to get what you’re trying to do to work. And that’s it. You don’t need to use 100% of what every technology offers. And I think that’s sort of a shift in mindset, at least for me. And to hear that Seb is using this at this huge company that’s doing amazing things, and it’s okay that he’s not using optimizations or other things that we know exist at Cloudinary.

And, I think, that’s a cool thing to realize. And anyone listening, if you wanna use Cloudinary in a really simple way, that’s awesome. Do it. Go for it. And you [00:55:00] can probably even use it for free too, because Cloudinary does offer free tiers as well when you’re not using it in so many ways.

Sam Brace: Yeah. You wanna use the tool for what you intend it to be used for. If it’s for a simple task, then use it simply. And if it needs to be advanced and complex, then use it more complexly. And I think that’s exactly what I’m seeing here: there are lots of different varying levels, as we show on this program, of how to work with images and videos.

And this was meant to accomplish a specific goal that was set out for Intuiface’s customer base. So this is great and I love the fact that it is simple because a lot of the factors of how to do this with your own platforms or with your own types of integration work, Seb was able to demonstrate that very clearly in this episode. So that’s wonderful too. And of course, if you are interested in taking a look at all of this, as we mentioned, intuiface.com, is where [00:56:00] you can check out all the great work that Seb and the team are doing over there. And also, one thing that I’m gonna be doing personally is, every time I walk into a museum and see a kiosk, I’m gonna be like, “hmm…I wonder if Intuiface was behind that now.”

And also notice that the repository that they had for the image uploader that he walked us through, that’s gonna be at his own GitHub. So that’s gonna be github.com/intuiface/cloudinary for a very simple URL. And of course, we’ll have all of the links listed in comments and show notes on all the things that we referenced today, such as the blog posts that guided us to Seb, and also the help article on how to work with the uploaders that they’ve gone ahead and built, too.

Lastly, before we dive away from this, remember we have all of the DevJams episodes everywhere that you typically listen to or watch podcasts. That includes Spotify, Apple Podcasts, Google Podcasts. It also includes YouTube. It also includes our own training academy, the Cloudinary Academy. So wherever you wanna check [00:57:00] out this content, we’re probably there.

But of course, the main repository for all of this is cloudinary.com/podcasts. So you can hear from guests like Seb and many other amazing developers that are pushing the boundaries of what images and videos can do. And also remember, you can continue those conversations with any of those amazing guests, as well as the millions of developers that are using Cloudinary at community.cloudinary.com.

Now, with that said, any final thoughts, Jen, before we let our guests go on with their day?

Jen Brissman: Final thoughts would be, it just seems like AI is making our lives easier. Everything from using ChatGPT to help Seb write some code, to even moderation. He was mentioning that they have manual moderation, and I was just thinking, “oh, forever Cloudinary has offered AI moderation,” and they can just send that out, immediately get a moderated response, code in any parameters they want, and use the multiple plugins we have. And I was just thinking, it’s [00:58:00] 2024 now. We were thinking like, wow, the Wii was so futuristic. Now we’re here in 2024 and we’re talking about these futuristic things, and I’m just excited for even five, ten years from now, the things that are gonna be coming out. Really cool conversation with Seb. I hope we see him again in the future. And a really great episode.

Sam Brace: I agree. I agree. So on behalf of all of Cloudinary, me and Jen, thank you for being part of this episode and also, be sure to check us out again when we’re back because we’ll bring forward another amazing developer who is pushing the boundaries of what’s possible with images and videos inside of their own projects.

So thank you. Take care, and we’ll see you at the next one.