From the course: Marketing with Anthropic Claude: Turning LinkedIn Comments Into Content Strategies
Unlock social media insights: Harness AI to analyze LinkedIn comments with Anthropic Claude
- Now, we've known for a long time that we can use machine learning and train models to go do tasks. And everybody assumes that in order to tap into machine learning, you have to go train your own model. That used to be true, because classic machine learning produced models that tended to be very brittle and narrow. You could create a model, but you couldn't really reuse it: the moment you changed some little bit of the data formatting or some slight aspect of the task, the model would break down. So you'd train a model, try to use it, realize it doesn't really work that well, go collect more data, cleanse your data further, and do all this work. It took a huge amount of effort because you had to get your data just right. Now, we're still going to see that we have to care about our data, but not to the same degree we did before.

The second piece was that you had to spend a lot of time training a model to perform some task, and the model tended to be specific to that task. In fact, your whole system had to be built around that model to do the thing you wanted it to do. So it took a ton of work. If you said, "Okay, I want to predict course titles for my next course," that was tough: you had to train a model on all this information about course titles relevant to what you were doing, and then build an entire software system around it just to get course title predictions out of it. Really hard, really expensive. That's why companies have teams of data scientists and AI experts. As an average user, you couldn't do that. You had to get them to build it for you, they handed it off, and then you got to use it, because there was no reusable way to do it. You had to start from scratch every time; they had to build a custom model for you.

Now, what's happened with generative AI is that it is an incredibly reusable model. It's a generic platform for doing all kinds of incredible machine learning tasks. We can tap into what it has already learned and use that model for all these tasks without necessarily having to retrain it on our data. What's amazing is that we can now retrain it in a prompt, and in this module we're going to look at how we teach it new tricks. Anybody can teach it new tricks without resorting to fine-tuning the model, where we actually change the weights. Now, people use "fine-tuning" in all kinds of ways; I'm starting to see it used more and more loosely to mean in-context learning with a prompt. But in general, if you hear somebody say they're going to train a new model, meaning they're going to fine-tune the model weights, there's a good chance you don't actually need to do that. You just need in-context learning in a big prompt to teach the model new tricks. And what's going to be amazing is you're going to see that you are capable of doing all these tasks. You don't have to train a custom model to perform something that would have required a custom model before. In fact, you just take whatever large language model is out there.
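To make "retraining it in a prompt" concrete, here's a minimal sketch of in-context learning. The comments, category names, and the FEW_SHOT_PROMPT variable are all invented for illustration; the point is simply that the "training data" is a handful of labeled examples living inside the prompt, not a fine-tuning run that changes any weights.

```python
# Minimal sketch of in-context learning: the "training data" is a few
# labeled examples placed directly in the prompt. All comments and
# category labels below are hypothetical, for illustration only.

FEW_SHOT_PROMPT = """You are helping a marketing team sort LinkedIn comments.
Classify each comment into one of these categories: Question, Praise, Content Request.

Comment: "Loved this post, exactly what I needed today!"
Category: Praise

Comment: "Could you explain how you picked the metrics in step 2?"
Category: Question

Comment: "Please do a follow-up on measuring newsletter engagement."
Category: Content Request

Comment: "{new_comment}"
Category:"""

# Swap in any new comment; the model infers the pattern from the examples
# above, with no fine-tuning or weight updates required.
prompt = FEW_SHOT_PROMPT.format(new_comment="Any chance you'll cover video content next?")
print(prompt)
```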
Now, you want to use the frontier models, right? These are the big models out there, the most advanced models from the key players like Anthropic, OpenAI, and Meta. You want to use those top-tier models because they're the most flexible and most able to adapt to new tasks. But you don't have to do anything complex to use them; you simply prompt them. You can use them as they were trained, and they're immensely capable, but you can also teach them totally new things. And the way you teach them new things is with examples in the prompt. We drill in on the part of the prompt that holds examples, and we use those examples to teach the model to do new things that previously would have required training a custom model from scratch. The beautiful thing is the examples don't even have to be that carefully formatted, and that used to be so much work. We had to think carefully about picking apart the data, cleansing it, and getting it into the perfect format. We don't have to do that now, because these models can pay attention to the data and figure out the important pieces they need. Of course, if we give the model the wrong data and teach it the wrong thing, that's still going to be a problem. But we don't have to worry as much about all the little details of formatting.

So here's what we're going to do. First, we'll look at how we can reuse existing AI capabilities in these models, things they've already learned to do on classic tasks like classification, prediction, and clustering. Then we'll learn how to train the model in a prompt and realize that you really are training it: we're doing machine learning through in-context learning in a prompt, and this is state of the art. It's incredibly sophisticated, incredibly powerful, and you're not giving up much to do it. In many cases you're actually going to beat what you'd get if you'd spent a vast fortune training your own model, and there are many examples of this. For example, Bloomberg trained their own custom model on their own data. I think it cost $10 million, and out of the box, GPT-4 was better at those same tasks. So you can do really, really well with what's already there. You just have to know how to tap into these capabilities, and you have to recognize that you are now a data scientist and AI person who can tap into them. From there it's probably time to learn how you evaluate the outputs, which is beyond the scope of this class, and how you combine these different building blocks. But I'm going to get you started by showing you how to tap into these models as a reusable model that works across a variety of task types, and then how you can teach it new tasks, how you can teach it from your data, all through prompt engineering.
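If you want to see what "just prompt it" looks like in code, here's a rough sketch that sends a few-shot classification prompt like the one above to Claude using the Anthropic Python SDK. The model name is a placeholder (check Anthropic's documentation for the current version), the comment being classified is made up, and this assumes your API key is set in the environment; it's an illustrative sketch, not the course's exact workflow.

```python
# Rough sketch: sending a few-shot classification prompt to Claude with the
# Anthropic Python SDK (pip install anthropic). Assumes ANTHROPIC_API_KEY is
# set in your environment. The model name is a placeholder; check Anthropic's
# docs for a current Claude model.
import anthropic

client = anthropic.Anthropic()

few_shot_prompt = """Classify each LinkedIn comment as Question, Praise, or Content Request.

Comment: "Could you explain how you picked the metrics in step 2?"
Category: Question

Comment: "Please do a follow-up on measuring newsletter engagement."
Category: Content Request

Comment: "Any chance you'll cover video content next?"
Category:"""

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; use a current Claude model
    max_tokens=20,
    messages=[{"role": "user", "content": few_shot_prompt}],
)

# The reply should be just the predicted category, e.g. "Content Request".
print(response.content[0].text.strip())
```

The same pattern covers the other classic tasks mentioned above: swap the examples and instructions, and the very same model does prediction or clustering-style grouping without any retraining.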
Contents
- Unlock social media insights: Harness AI to analyze LinkedIn comments with Anthropic Claude (5m 50s)
- (Locked) Transform comments into content: Classify LinkedIn feedback with Anthropic Claude prompts (9m 14s)
- (Locked) Build powerful content strategies: Use Anthropic Claude for clustering analysis of LinkedIn commenters (4m 50s)
- (Locked) Discover trending topics: Predict new content areas with Anthropic Claude based on LinkedIn comments (8m 12s)
- (Locked) Craft content that resonates: Create personalized recommendations using Anthropic Claude and LinkedIn comments (3m 41s)