Watch Me Build An AI Parasite SEO System Using Medium
lVdE08ArgGs — Published on YouTube channel Nick Saraev on May 27, 2024, 2:07 PM
Summary
This summary is generated by AI and may contain inaccuracies.
- Nick is going to build a Medium parasite SEO scraper. He explains how it works, how to hook it up to WordPress, and invites people to watch the video.
- We are going to use an Apify "medium scraper" actor to scrape all the publicly available content, then rewrite it using AI.
- Assuming we're just rewriting the same publication without care for the keyword, we could run this Apify scraper daily and dump all the posts into Make.com.
- You can modify this to pull from a variety of authors and make it run every day. Then you make a Google Sheet and connect to it.
- Nick tests the flow and reviews the results in Make. Apify is a little more difficult to work with, but the flow is still relatively simple.
- We have all the data we need from the Apify actor; now we want to convert the HTML into raw text.
Video Description
GET THE BLUEPRINT HERE FOR FREE:
https://leftclicker.gumroad.com/l/xovbb
JOIN MY AUTOMATION COMMUNITY & GET YOUR FIRST CUSTOMER, GUARANTEED:
https://www.skool.com/makerschool/about
SUMMARY:
Watch me build a parasite SEO system using Medium, Make.com, and AI. The system scrapes Medium blog posts, rewrites them with AI, and then creates a Google Doc.
WHAT TO WATCH NEXT:
How I Make $20K/Mo on Upwork with Make: https://youtube.com/watch?v=uuOdz4I9h6E
My $21K/Mo Make.com Proposal System: https://youtube.com/watch?v=UVLeX600irk
Generate Content Automatically With AI: https://youtube.com/watch?v=P2Y_DVW1TSQ
MY TOOLS, SOFTWARE DEALS & GEAR (some of these links give me kickbacks. Thank you!)
INSTANTLY: https://instantly.ai/?via=nick-saraev
SMARTLEAD.AI: https://smartlead.ai/?via=nick-saraev
ANYMAIL FINDER: https://anymailfinder.com/?via=nick
APOLLO.IO: https://get.apollo.io/bisgh2z5mxc1
PHANTOMBUSTER: https://phantombuster.com/?deal=noah60
PANDADOC: https://pandadoc.partnerlinks.io/ar44yghojibe
TYPEFORM: https://typeform.cello.so/rM8vRjChpbp
CLICKUP: https://clickup.pxf.io/4PQo61
MONDAY.COM: https://try.monday.com/1ty9wtpsara2
NOTION: https://affiliate.notion.so/3viwitl53eg7
APIFY: https://www.apify.com/?fpr=98rff
MAKE: https://www.make.com/en/register?pc=nicksaraev
GOHIGHLEVEL: https://www.gohighlevel.com/30-day-trial?fp_ref=nicksaraev
RIZE: https://rize.io/?via=LEFTCLICKAI (use promo code NICK)
WEBFLOW: https://try.webflow.com/e31xtgbyscm8
CARRD: https://try.carrd.co/myjz1yxp
REPLY: https://get.reply.io/yszpkkqzkb8f
MISSIVE: https://missiveapp.com/?ref_id=E3BEE459EB71
PDF.CO: https://pdf.ai/?via=nick
FIREFLIES.AI: https://fireflies.ai/?fpr=nick33
DATAFORSEO: https://dataforseo.com/?aff=178012
BANNERBEAR: https://www.bannerbear.com/?via=nick
VAPI.AI: https://vapi.ai/?aff=nicksaraev
BOTPRESS: https://try.botpress.com/ygwdv3dcwetq
CLOSE: https://refer.close.com/r3ec5kps99cs
MANYCHAT: https://manychat.partnerlinks.io/sxbxj12s1hcz
SOFTR: https://softrplatformsgmbh.partnerlinks.io/gf1xliozt7tm
SITEGROUND: https://www.siteground.com/index.htm?afcode=ac0191f0a28399bc5ae396903640aea1
TOGGL: https://toggl.com/?via=nick
JOTFORM: https://link.jotform.com/nicksaraev-Dsl1CkHo1C
FATHOM: https://usefathom.com/ref/YOHMXL
AMAZON: https://kit.co/nicksaraev/longform-automation-content-youtube-kit
DROPCONTACT: https://www.dropcontact.com/?kfl_ln=leftclick
GEAR KIT: https://link.nicksaraev.com/kit
UPWORK: https://link.nicksaraev.com/upwork
TODOIST: https://get.todoist.io/62mhvgid6gh3
CONVERTKIT: https://partners.convertkit.com/lhq98iqntgjh
FOLLOW ME
My content writing agency: https://1secondcopy.com
My automation agency: https://leftclick.ai
My Twitter/X: https://twitter.com/nicksaraev
My blog (followed by the founder of HubSpot!): https://nicksaraev.com
WHY ME?
If this is your first watch: hi, I'm Nick! TLDR: I spent five years building automated businesses with Make.com (most notably 1SecondCopy, a content company that hit 7 figures). Today a lot of people talk about automation, but I've noticed that very few have practical, real-world success making money with it. So this channel is me chiming in and showing you what *real* systems that make *real* revenue look like!
Hopefully I can help you improve your business, and in doing so, the rest of your life :-)
Please like, subscribe, and leave me a comment if you have a specific request! Thanks.
Transcription
This video transcription is generated by AI and may contain inaccuracies.
Hey everyone, Nick here. In this video I'm going to be building a Medium parasite SEO scraper. This is essentially going to run an entire parasite campaign for you, using Medium as the data source. It's then going to process the articles, rewrite them using AI, and then, in our case, publish them to a Google Sheet for simple project management access. But I'll also show you how to hook this up to WordPress, so you can basically create your own automatic content mill if you'd like. I'm fully aware that this is a very dangerous topic to choose, especially now with Google changing so many of its guidelines, but I've had so many people ask me how to do this that I thought I'd just drop the knowledge. If you want to do this for yourself, or you're curious about how parasite scrapers and parasite SEO campaigns work, this is the video for you. Stay tuned and let's get into it.

Okay, first things first: thanks so much to everybody that's joined my community. I was not expecting to hit 150 members in like 72 hours. I've capped it at 400; no more than 400 people can enter this community, simply because I want it to remain very high value, consistently valuable and useful to everybody in it. I'm increasing the price by $10 for every 40 people that join, so the price three days ago was $28; I think now it's up to $58 or maybe $68. The reason I'm doing that is to represent the additional value added to the community, and also, logistically, how much time and energy it takes me (and maybe other people) to manage it. If you want to get in, I encourage you to give it a quick look-through. Again, I want to thank everybody that's joined so far; it's been super great having y'all.

Without further ado, let's get into how to build this Medium parasite SEO campaign. The first thing we need to understand is how Medium works. Medium is a blogging service; essentially, it's like Facebook mixed with a WordPress blog. You have all the same functionality of most other social media platforms: you can like a post, you can comment on a post, and if you click through, you can obviously read the blog post. One of the really valuable things about Medium is that you can post on Medium and also post on your own blog, cross-posting between the two. And because Medium is a social network, Medium will advertise the blog post for you; that's the value for a lot of these Medium bloggers. I used to be a Medium blogger as well. Not to toot my own horn, but I think I once got to 2,000 followers. It's pretty cool, and I've found it to be a fantastic source for blog posts and parasite SEO campaigns. I built one out that's worked very well, although obviously this may be a little controversial, because it's parasitic: you're essentially rephrasing somebody's content to rank on Google. But let's be real here: search engines are just a game at this point, and if you can game the game, then you're probably in a pretty good position to make a bunch of money on the Internet.

So we're going to scrape all these posts, then rewrite them using AI, and then post them. I'm not going to do the whole posting step; I'm just going to show you how to do it.
If you wanted to post on, say, WordPress, or you wanted to make a Google Doc, that whole step is pretty straightforward. The way we're going to scrape is using Apify. Apify is a web scraping service that essentially allows you to scrape anything in the whole wide world that you could ever possibly want: Facebook, Instagram, LinkedIn, TikTok. Literally any publicly available data source probably has an Apify "actor", which is their term for a scraping service that somebody has built for it. The way you do it is, first you sign up, and they're very generous with their trials. I think I've mentioned this a couple of times; jeez, I feel like I should be on their advertising team at this point. Then you go over to the store and type in whatever you want: Instagram scraper (there are thousands that scrape posts, mentions, reels), LinkedIn scraper, Google Maps scraper. The one we're going to use is a Medium scraper. So I'm going to type "medium scraper" in here, and there are two we can pick from: one by ivanvs and one by qpayre. I was previously using the one by ivanvs, but honestly, just to show you that you can use different scrapers, I'm going to use this one. It looks like it takes an author name and a number of posts, so you could set this up to scrape a giant list of authors if you wanted to. I'm going to limit the number of posts to, let's just do three, and it looks like we can also scrape the article content, so I'm going to check "article content". Very straightforward, very simple. It looks like there's a trial, and then it's $20 a month; I'm just going to go ahead with the trial. You can imagine that if you wanted to run a real parasite SEO campaign, scraping 1,000 posts a month or something, the potential upside is way higher than the subscription cost, so you'd probably be more than happy to spend it. But in our case, the free usage we have here is going to be more than enough; we're not going to need more than $5 of it to run this.

Anywho, I'm doing this because I just want to test the data. I want to show you the data, and then strategize and sort of figure out, live in front of you, how we're going to build out the rest of this. One thing a lot of people have mentioned is really valuable is hearing my thought process out loud, not necessarily me knowing everything about the system I want to build ahead of time. So that's what I'm doing here.

Okay, looks like the scraper worked great. We received a column called author, another called author URL, a column called body, one called published at, and then raw article, which contains all the text of the piece we're going to attempt to rewrite. We also have the title, which is valuable, and the URL, in case we want to store it; maybe a human being is looking over the system or something like that. So all of this is very positive. You'll notice that the raw body here is sort of crazy: it's just a bunch of HTML. So in my head I'm thinking, okay, well, if I want to rewrite this, I'm going to have to convert the HTML into text; that's not going to be too big of an issue.
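For reference, here's a rough sketch of what kicking off this scrape looks like through Apify's REST API, if you'd rather drive it from code than from the Apify console. The endpoint shape is Apify's documented "run actor" call; the actor ID and the input field names (authorName, numberOfPosts, articleContent) are my guesses based on the form fields above, so treat them as placeholders for whichever Medium scraper you pick.

```python
# Sketch: start a run of a Medium scraper actor via the Apify API.
# Assumptions: APIFY_TOKEN is set, and ACTOR_ID / input field names
# match whichever Medium scraper you chose in the Apify store.
import os
import requests

APIFY_TOKEN = os.environ["APIFY_TOKEN"]
ACTOR_ID = "someuser~medium-scraper"  # hypothetical actor ID

run_input = {
    "authorName": "nicksaraev",  # the Medium author to scrape
    "numberOfPosts": 3,          # keep it small while testing
    "articleContent": True,      # also pull the raw article HTML
}

resp = requests.post(
    f"https://api.apify.com/v2/acts/{ACTOR_ID}/runs",
    params={"token": APIFY_TOKEN},
    json=run_input,
    timeout=30,
)
resp.raise_for_status()
run = resp.json()["data"]
print(run["id"], run["defaultDatasetId"])  # the dataset ID matters later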
And then I'm also thinking: okay, we've got a title. So if I want to make a parasite SEO campaign, it's going to be tough to know the keywords, but I'm just going to assume we're rewriting without care for the keyword. It's not perfect, but it's not the worst either, and it'll make our lives a lot easier. We could also have AI come up with keywords, of course, although I think that's a little beyond the scope of this video. So yeah, this should be everything we need to make the parasite campaign.

What we could do is run this Apify scraper daily or something like that. Let's say we're following a publication; not nicksaraev, but some account that spits out 50 articles a day or something. We could run this daily at, say, 5:00 a.m. At 5:00 a.m. we'd take all of these posts and dump them into our Make.com scenario. Make.com would then do the rewriting, and then we could turn each post into a Google Doc, or maybe a WordPress post, and add it to a Google Sheet. And maybe in this case we want a human in the loop, so the human's job would essentially be to look through the Google Sheet maybe once a day and mark whether or not each piece was posted. That sounds pretty simple. I think that's a pretty solid use case, and something a lot of companies could probably start implementing ASAP.

So that's what we're going to do. I'm going to assume we're just scraping the same publication; I'm just going to hard-code in nicksaraev, because I don't want to scrape somebody's data that's not mine. But yeah, in the future you could modify this to pull from a variety of authors. If you had like 100 authors and wanted to do this once a day, you could do that as well. Tons of stuff you can do. I told you I'd be getting into trouble here for making a parasite campaign.

Okay, so I have a scenario up here called "parasite medium SEO system". Let me think, what else do I need? I mentioned a Google Sheet, so I'm going to want to create one here. I don't remember what account I'm in, so just give me a moment. No, I'd need to make a webhook if I used that, unfortunately. So let's just do "add a row", and I think I want this one, and I'm just going to log back into the service. Unfortunately, it's very finicky and always logs me out. That's annoying. That is also annoying. That is super annoying. I knew that would work eventually. Okay, great. Then I'm going to make a Google Sheet, and I'm just going to call it "medium parasite SEO campaign". And on this side I'm going to pick it up, because I just want to have it all open here, so I'm going to connect to it immediately: we want "medium parasite SEO campaign", the sheet name is going to be Sheet1, and that's all; I just want to connect to it.

The first thing we need to do is get the Apify run. Basically, for scheduling, Apify will just run the scraper every day at whatever time we tell it to, and what we need to do in Make.com is watch for it to complete. I'm going to go over here and type in Apify. There are a bunch of different Apify modules here, and the one we want is "watch actor runs". This is a trigger module, which is why it doesn't have anything on the left-hand side of it.
This is going to be the beginning of our flow, and I'm just dragging the trigger scheduler over here: this used to be the trigger, this is now the trigger. So we're then going to create a webhook; I'm going to call it "finished medium scraper". The connection I'm going to use here is this one, and the actor is going to be... there are two Medium scrapers; I think qpayre was the one I was using, right? So we're going to do that. Okay, and then we have "watch actor runs".

Now I'm going to run this, and then I'm going to run the Medium scraper again. The reason I'm doing this is because I just want to test this flow, and I also just want to see the results in Make. Now, Apify is a little more difficult to work with than some services you might be familiar with. You don't just watch and grab all of the data in that one "watch actor runs" module we've set up here. We actually need to watch the actor run, at which point we get a thumbs up basically telling us that it's run, and then we need to make another call to Apify to specifically get the data we want. So it's going to be sort of a two-step thing, but it's still relatively simple, and if you've seen my previous Apify videos, where I scraped everything from Twitter to Instagram, you'll know what I mean.

Okay, so this just finished, and we received the result in Make.com. What we need is the default dataset ID here. So I'll go back, and then we need the "get dataset items" module. What I'm going to do is feed in the dataset ID hard-coded here; I just copied it directly from the output of "watch actor runs", because I just want to test this flow, and this is just going to be the beginning of my tests from now on. So we're going to right-click this, run it, and now we see all of the data we need. We've got the title, we've got the URL, we've got the author, we've got the raw article here with its HTML, and yeah, we've got everything else too. So that's pretty nice.

Okay, great. Now that we have that, I want to convert the HTML into raw text. The way I personally do this is I type "HTML", the text parser comes up, and we just go "HTML to text". What we want to do is feed in this raw article, which has all the HTML.
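For reference, the same two steps outside of Make look roughly like this: fetch the items from the run's default dataset (that endpoint is Apify's documented dataset-items call), then strip the HTML down to text. BeautifulSoup here is my stand-in for Make's "HTML to text" module, and the item keys (rawArticle, title) are assumptions that depend on which actor you chose.

```python
# Sketch: fetch scraped items from the run's default dataset and
# convert the article HTML to plain text.
# Assumes `pip install requests beautifulsoup4`.
import os
import requests
from bs4 import BeautifulSoup

APIFY_TOKEN = os.environ["APIFY_TOKEN"]
DATASET_ID = "REPLACE_WITH_defaultDatasetId"  # from the run output

resp = requests.get(
    f"https://api.apify.com/v2/datasets/{DATASET_ID}/items",
    params={"token": APIFY_TOKEN, "format": "json", "limit": 3},
    timeout=30,
)
resp.raise_for_status()
items = resp.json()  # a list of scraped posts

for item in items:
    # Key names follow the columns described above; the exact keys
    # depend on the actor's output schema.
    text = BeautifulSoup(item["rawArticle"], "html.parser").get_text("\n")
    print(item["title"], len(text))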
Now, if I click run, it'll watch the Apify actor run again. So to test just from "get dataset items" onwards, I need to move the trigger over and run from here. It's going to tell me that a transformer should not be the last module in the route; that's okay. It looks like I got three results, which is interesting; I don't know why I got three, so let's see what happened. Okay, a couple of things here. It looks like I'm getting text that isn't necessarily the article I want. Oh, and I guess it's running three times because there are three articles. Duh. Yeah, it looks like I'm not getting just what I want: I'm getting some additional information that ends in "Share", and then my post begins. And then it ends with "Nick". So this tells me that, assuming all of the posts have the same format, where "Share" is the last thing before the article begins... sorry, I was looking at the input there.

We could probably just parse it with a separator: we'd split it at "Share" and take everything beyond that line. Okay, that looks good. Same thing with bundle two. And bundle number three ends with "Share" as well. So this looks pretty good to me. But aside from that, we just get the text, which is nice, so we can feed that text into AI. These articles are pretty short; this may not work as well if your articles are a little longer. I don't mean it's not going to work, it's just not going to work as well. Essentially, the way you do rewriting is you break a post down into bullet points and then re-expand the bullet points back into a full article. So if it's super long, sometimes AI won't do a very good job of it. But that's all right.

What I'm going to do next, just for completeness's sake, is go over here to "set multiple variables". I'm just going to call this rawText, and I'm going to feed in the raw article and... oh, sorry, I'm going to feed in the text, and I'm going to use the split function, splitting right at "Share". Then I'm going to take the second result, which is everything after the "Share". Now, there may be multiple "Share"s, so I'm just going to run this first and see; if there are multiple "Share"s, I'll have to adjust my approach a little. But no, it looks like this worked perfectly, and I got the second half of the response, which is just my article, which is what I want. Now, I can't guarantee this is going to work every single time, so let me see if there's anything else I could use instead of just "Share". What I could probably do is split on newline, "Share", newline, though I don't know if that'll be perfectly represented here, so let me just see. We'll go newline, "Share", newline, and run this again. Yeah, okay, that's better. If you think about it, the word "share" in and of itself may come up again at some point in an article. The first few results look to just be showing other blog posts, and if one of those blog posts had the word "share" in its title, it would have split inappropriately. This way at least insulates us a little bit: we only split on newline-"Share"-newline, which is presumably a lot less common.

So now we have a variable called rawText, which is our AI-ready article, and we need to rewrite it. So I'm going to add an OpenAI module here, and I'm going to have to do a fair amount of training on it to make it work. I'll be including all of the prompts and everything, so you'll have everything you need in the blueprint at the end of this video, but I'll walk through some of it with you and then skip ahead. First things first, I'm setting max tokens to 4096. If I go down to advanced settings, I set the temperature to 0.7 all the time; it's just my personal preference, but I find the results usually a little better that way. And there are a couple of things we're going to change. If you've seen me do previous AI stuff, we're going to set the frequency penalty and the presence penalty to values above zero. The reason is that the frequency penalty penalizes the likelihood of new tokens based on their occurrence in the text so far: if they've occurred many times before, it makes them less likely to occur again. So if you think about it from a rewriting perspective: if the text I'm rewriting says "hello, my name is Nick", increasing the frequency penalty and the presence penalty decreases the likelihood of the model saying those exact same words verbatim in its output. The two work quite similarly. Presence penalty simply penalizes a token if it has ever appeared; it's sort of a zero-or-one check: is that token present? If no, same likelihood of occurring; if yes, penalize by a flat amount. Frequency penalty is cumulative, so the more often a token occurs, the more it gets penalized. In my case, I'm just going to do 0.6 for frequency and 0.5 for presence. So those are the advanced tweaks I'm making.
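For reference, here's roughly what those two steps (the newline-"Share"-newline split and the penalized rewrite call) look like with the OpenAI Python SDK, outside of Make. The model name is a placeholder; the sampling settings mirror the ones above.

```python
# Sketch: isolate the article after the "\nShare\n" boundary, then ask
# the model to bullet-point it using the penalty settings from above.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def extract_article(page_text: str) -> str:
    # Everything after the first newline-"Share"-newline marker,
    # mirroring the Make.com split() / take-second-element trick.
    parts = page_text.split("\nShare\n", 1)
    return parts[1] if len(parts) == 2 else page_text

def to_bullets(raw_text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",           # placeholder; use your target chat model
        max_tokens=4096,
        temperature=0.7,
        frequency_penalty=0.6,   # cumulative: repeated tokens get penalized more
        presence_penalty=0.5,    # flat penalty once a token has appeared at all
        messages=[
            {"role": "system",
             "content": "You are a helpful, intelligent writing assistant."},
            {"role": "user",
             "content": ("Your task is to convert a blog post into a list "
                         "of extremely detailed bullet points.\n\n" + raw_text)},
        ],
    )
    return resp.choices[0].message.content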
Next up, I'm going to define some simple prompts. The way I always do things is I define a system prompt ahead of time, where I have it define its identity, basically; I'll usually say something along the lines of "you're a helpful, intelligent writing assistant". Then for the next prompt I'll do user, and I'll say: your task is to convert a blog post into a list of extremely detailed bullet points. What I'll do next is train it. The way I train it, in my case, is I usually provide three examples of what I want it to do, and then on the fourth I provide the actual data and say: go ahead and show me what you can accomplish. The way to do that systematically and in a structured manner is to start with a user prompt where you give it an example, and then an assistant prompt where it answers the example. Then you do user again, and assistant again. Then user one last time, and assistant one last time. At the end of this it's reasonably intelligent and sort of knows what I'm looking for.

Now, this is beyond the scope of this video, but what I'm going to do over the course of the next few minutes, after I cut, is go out, find some blog posts, and convert them into very detailed bullet points, just to give it examples of what I want it to do. Then on the back end I'm going to do the inverse: I'm going to create another GPT-4 module and have it take a list of bullet points and convert that into a full article. So again, this is manual work you're only really going to have to do once; after you do it once, you'll be able to run it systematically on autopilot. But, you know, like anything worth having, it takes a little bit of hard work at the beginning. So I'll see y'all in a minute.
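For reference, the user/assistant example-pair pattern described above maps directly onto the chat-messages format. A minimal sketch, with placeholder example texts (in practice they'd be real article/bullet pairs pulled from your own blog; as he ends up doing below, this version uses two examples rather than three):

```python
# Sketch: the few-shot training structure described above. The
# EXAMPLE_* strings are placeholders for real article/bullet pairs.
EXAMPLE_POST_1 = "full text of example blog post #1..."
EXAMPLE_BULLETS_1 = "- detailed bullet point\n- another detailed bullet point"
EXAMPLE_POST_2 = "full text of example blog post #2..."
EXAMPLE_BULLETS_2 = "- detailed bullet point\n- another detailed bullet point"

def few_shot_messages(new_post: str) -> list[dict]:
    return [
        {"role": "system",
         "content": "You are a helpful, intelligent writing assistant."},
        {"role": "user",
         "content": ("Your task is to convert a blog post into a list of "
                     "extremely detailed bullet points.\n\n" + EXAMPLE_POST_1)},
        {"role": "assistant", "content": EXAMPLE_BULLETS_1},  # model "answers"
        {"role": "user", "content": EXAMPLE_POST_2},          # second example
        {"role": "assistant", "content": EXAMPLE_BULLETS_2},
        {"role": "user", "content": new_post},  # the real article goes last
    ]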
All right, so we now have two AI modules. The first is the same one I showed you previously; I've just gone and added a bunch of examples using my own data. And because I'm using my own data (I actually used to run a newsletter called The Cusp, which has a very particular format), I imagine this is probably going to do a little better on my articles than it will for most other articles, but you can change the prompt up a little and make sure it's not super structured if you don't like the outputs. Essentially, what I've done is gone through my blog and pulled a couple of blog posts. This was one on how AI models can now surf the Internet for you; keep in mind, this is from like two years ago, so things have gone really, really far since then. And then I have an assistant prompt containing basically a bunch of bullet points for that article. These bullet points are just summaries of what the article is about. I had AI help me generate a fair number of them. They're not perfectly formatted; there are a bunch of random spaces and stuff like that. I just don't really give a shit if there are additional spaces: it's not going to hurt the quality of the outputs at all, because I'm just going to be converting it back into text anyway. So feel free to look through the blueprint if you want to take a peek at that. And then, for the other example, I did the same thing: I went through my blog, copied in an article, had AI help me generate some of the bullet points, and then did some fine-tuning and tweaking.

What I noticed is that when I had three examples, the output was a little poor. I imagine this is because the articles are pretty long: if you have three at, say, 700 words each, your prompt is now over 2,000 words, maybe 2,700 tokens, and the longer the initial token count, the worse the performance typically gets. So I balanced the improvement to the output I got from providing examples against the decrease in quality I got from making the prompt super long, and two examples is what I ended up with. My last prompt is just the user prompt, where we feed in the raw text from module number eight over here.

And then I did the exact same thing, just in reverse, over here. The good news is I was actually able to copy over the bullet points from the previous module and just reverse the order: now the bullet points are in the user prompt and the full article is in the assistant prompt. I'm just trying to get it to learn the relationship between the bullet points and the article, just like previously I was getting it to learn the relationship between the article and the bullet points.

The end result is actually pretty good, quality-wise. If I go back to "get dataset items", I changed the limit to one, because I only want to test this on one of the pieces; I don't want to do it three times every time I run a test. That would just be a little operationally intensive for basically no positive benefit to me. It's going to take a little bit here, since it is processing a fair amount of text. So it looks like this is done: it's generated a bunch of bullet points. We go to the result here, and we'll see it's changed the headings and changed the output text a little bit. It's close enough that I imagine it would rank for the same things the original article was probably ranking for, although it's far enough away that you have a little plausible deniability if somebody hasn't read the initial piece. And then, yeah, we have the full piece over here. You'll see that the full piece is written in Markdown, which is nice and very useful for us, because we get to convert it right back into HTML, and then we're going to use that to make a Google Doc. So I have an HTML-to-Markdown module over here... oh, sorry, wrong one: we want the Markdown-to-HTML module over here. What I'm going to do is feed in the output.
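For reference, the back half of that pipeline (bullets back into an article, then Markdown into HTML) sketches out roughly like this in Python. The `markdown` package stands in for Make's Markdown-to-HTML module, and the model name is a placeholder; the reversed few-shot pairs would slot into the messages list where noted.

```python
# Sketch: expand bullet points back into a full article, then convert
# the Markdown output to HTML. Assumes `pip install openai markdown`.
import os
import markdown
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def to_article(bullets: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        max_tokens=4096,
        temperature=0.7,
        frequency_penalty=0.6,
        presence_penalty=0.5,
        messages=[
            {"role": "system",
             "content": "You are a helpful, intelligent writing assistant."},
            # The reversed few-shot pairs (bullets -> article) go here.
            {"role": "user",
             "content": ("Convert these bullet points into a full blog post, "
                         "written in Markdown.\n\n" + bullets)},
        ],
    )
    return resp.choices[0].message.content

article_md = to_article("- point one\n- point two")
article_html = markdown.markdown(article_md)  # ready for the Google Doc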
Then I'm going to make a Google Doc: "create a document" here, and then I'm just going to select the account that's relevant. The name of the document is just going to be the title for now; we can obviously have it generate a new title for us as well. And then I'm going to feed in the HTML. For now, I'm just going to have the document location be the main folder here. Then I'm going to hook this up to a Google Sheets "add a row" module. Now, I'm going to want to store a bunch of stuff here, and I haven't actually looked through the data to decide what, so let me just look through it. What do we actually want to put in? Well, first we're probably going to want the title of the original article. So let's do: date scraped, original title, original URL, author, author URL, published at. We also have the body, and I don't actually fully know what the body is, so I'm not going to worry about the body. And then we can also just post in the raw article. Honestly, it's going to be a fair amount of text and data, but screw it; no real reason not to. I think I just cancelled out my "published at", so we'll redo that, then raw article, and then I'm going to do parasite title and parasite article, keeping in mind that I haven't actually rewritten the parasite title yet; maybe I'll do that in a sec. And then I'm just going to make this a nice color. There you go, that looks nice. And, as you'll know if you watch my videos, I'm going to make the header row look nice and pretty and space it out a bit. Then I'm going to go back over here and refresh all the headers; the way you do that is just click the refresh button by "table contains headers". And now we actually have all the fields we need: date scraped, everything.

So first I'm going to put the date in, by going to the calendar section here and clicking "now". Then the rest is pretty straightforward: title, URL, author, author URL, published at, raw article. Then parasite title; we don't have one of those yet, so I'm not going to worry about that. And for parasite article, let's just use the raw HTML (or raw Markdown) coming out of this module.

Now let me create a parasite title really quickly. I'm going to call this module "rewrite title"; this one will be "bullet point to blog" and this one "blog to bullet point", just so I can keep myself a little more organized. And then: your task is to rewrite a title; retain the topic and the purpose of the title, but write it in different words. Now, I trust it so much on such a small little request that I'm not even going to worry about giving it a bunch of examples. I'm just going to feed it the title and say: your output should be only the rewritten title, nothing else. There you go. And then over here in the Google Sheet, I'm just going to add in that parasite title, which will be in the result here. And then there's a URL here; sorry for jumping around all over the place. I'm not going to write the raw text into the parasite article column. What I'm going to do instead is use the webview link from the Google Doc. There we go. Okay, voila. Let me save this puppy.
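For reference, a stand-in for that Google Sheets "add a row" step, assuming a Google service account and the gspread library; the column order mirrors the headers above, and the item key names are assumptions carried over from the scraper's output.

```python
# Sketch: append one row per rewritten post, matching the column
# layout described above. Assumes `pip install gspread` and a service
# account with access to the sheet.
from datetime import datetime, timezone
import gspread

gc = gspread.service_account()  # reads credentials from the default path
sheet = gc.open("medium parasite SEO campaign").sheet1

def add_row(item: dict, parasite_title: str, doc_link: str) -> None:
    sheet.append_row([
        datetime.now(timezone.utc).isoformat(),  # date scraped
        item.get("title", ""),
        item.get("url", ""),
        item.get("author", ""),
        item.get("authorUrl", ""),      # assumed key names
        item.get("publishedAt", ""),
        item.get("rawArticle", ""),
        parasite_title,
        doc_link,  # webview link to the generated Google Doc
    ])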
Now let's give this a test run. What it's done is grab our first dataset item from the default dataset we scraped previously, and then parse it, so now we have text. It's then setting the variable, where I split based on the presence of the newline-"Share"-newline. We're then converting to bullet points, then converting the bullet points back into a blog post. We're then rewriting the title, converting the article from Markdown to HTML, using that HTML in a Google Doc (I'll show you in a second), and then adding a row to our Google Sheet. It's going to look kind of ugly here, because Google Sheets always bolds things when it shouldn't; sorry, I'm going to unbold, and it uses kind of weird formatting here. I'm not actually going to double-click this, because it would make the whole thing super long, but what I am going to do is make a couple of these columns a little wider, so we can see the information we want. "ChatGPT scores..." okay, maybe that's good. We've got the original URL here. We've got the author, which presumably isn't super important; the author URL, which isn't super important; the raw article, which isn't super important, but we're going to keep it anyway. Then we have the parasite title, and then we have the parasite article over here, which I'm just going to make nice and wide.

Okay, and let's actually go see the finished product here. Okay, that is ugly. That is actually not what we wanted. That's a shame. Why? Seems like we have a bunch of p tags for some reason. So let's see: what did we output out of this? Yeah, we did output a bunch of p tags, which means... what was our input here? The input was just the text. Oh, I've got it: I didn't actually specify that it needs to write in Markdown. That's my bad. We'll say "write the article in Markdown ATX format". And since I didn't actually go through and test this, I need to give it some examples here; otherwise I'm not going to get good results. So this is me basically having it write Markdown for me, and the way I'm doing it is going through and just adding a bunch of headings. Okay, that's number one. We'll do number two here. Okay, just about done. All right, there you go, should be good.

We're going to give that a round two. So I'm going to delete this old, crappy, terrible result that I never want to see again in my whole long life, rerun this puppy, and gaze upon my works, ye mighty, and despair. I'm sure it'll be good quality. This is cope. No, this is going to be great. Awesome. So we're creating a Google Doc this time. Let's just make sure this Google Doc looks good; I'm going to grab the webview link and open it in a new tab. Looks nice. Looks very nice. Now I'm realizing that, because I wrote this article myself and mentioned my newsletter, The Cusp, a bunch of times, this isn't a very good example for parasite purposes: it's literally a summary of The Cusp's AI news and implications. We could improve this just by going through the prompt and removing any terms that refer to The Cusp, so that it doesn't think it's a newsletter; we can do that pretty quickly. But anyway, if we go back here, we just have to fix the color again, because I deleted the last result, and the last result is what tells the next result what color and format it should be. We've got everything we want. Looks like the original title was "ChatGPT Scores 70% on the Sample USMLE Exam", and this one was the same claim with the acronym written differently. I think these are sufficiently different; this could probably work as a news parasite reasonably quickly. And then, as with the parasite campaign I opened with: keep in mind it's not super long (it's about the same length as the previous one), and it says "nicksaraev" and all that stuff.
So this may need a little tweaking, whereby you jump into the prompts and say: if a name is referenced, don't include the name; just give it a bunch of "don't do this, don't do that" rules. It'll probably do pretty well. But even as it stands, you could totally whip this up on a WordPress website, set it to scrape every single day at whatever frequency you want, and have it go off to the races.

The last thing I'll do, just for completeness's sake: if you wanted to, say, publish this on WordPress, you can do so pretty straightforwardly. Make actually has a WordPress module; the one you want is "create a post", and then you need to create a connection. The way you do that is you basically grab your WordPress URL. So let's say my WordPress website is mywordpresssite.com; then mywordpresssite.com/wp-json would be the REST API base URL. And then what you need to do on WordPress is go and download an API connector: search for the "WordPress Integromat Connector" plugin, download it, and stick it on your WordPress website. (Integromat is the old name for what is Make.com today.) This will basically give you an API key that you can use to connect. Then you'll be able to actually jump in and create a post: you'll be able to define the title, define the tags, and you may even be able to schedule, I believe, which is quite nice. So if you wanted to run this puppy on autopilot, that's how you'd do it.

The last thing I'm going to do is show you how to schedule this. Just go back to Apify, go down to schedules, and create a new one at whatever frequency you want this to scrape the data source: daily, weekly, yearly, whatever. Then, under where it says "no actors or tasks yet", just go to "actor" and type in the scraper you want. As I mentioned previously, we're just feeding it the same author name every time. You don't have to do it this way; you can feed it a different author name every time. If you wanted to do it that way (say, store author names in a sheet and call the sheet every time), you would instead trigger this from Make.com itself, using Apify's "run an actor" module. The way you do that is you basically select the actor here, which would be, oh boy, this one here, and then you'd need to put in some input JSON. The input JSON would look like this, where "nicksaraev" would be your variable from, I don't know, your Google Sheet author column or whatever you're pulling in from. You can add another Make module before that, have that be the trigger, it'll run, and then you can have a separate scenario that actually watches the actor runs, like I mentioned. You could also click "run synchronously" back there, and then it'll just add it to the flow we have here, which is also quite useful.
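Circling back to the WordPress step for reference: the video uses the Make connector plugin, but as a code-level alternative, WordPress's standard REST API works with an application password. A rough sketch (the site URL and username are hypothetical):

```python
# Sketch: publish the rewritten article via WordPress's built-in REST
# API, an alternative to the Make.com connector plugin used above.
# Assumes an Application Password created under your WordPress user.
import os
import requests

WP_BASE = "https://mywordpresssite.com/wp-json/wp/v2"  # hypothetical site
AUTH = ("nick", os.environ["WP_APP_PASSWORD"])

def publish_post(title: str, html: str) -> str:
    resp = requests.post(
        f"{WP_BASE}/posts",
        auth=AUTH,
        json={
            "title": title,
            "content": html,    # the Markdown-to-HTML output from earlier
            "status": "draft",  # keep the human in the loop before going live
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["link"]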
Anywho, I'm going to export this blueprint, and then voila. There you have it: there's your Medium-to-Google-Doc parasite scraper, and I showed you how to do it with WordPress as well. Thanks so much for watching this video; I had a lot of fun putting it together. If you want more videos like this, just leave a comment down below requesting maybe a specific scraper or a specific platform, and I'll happily whip it up for you. Otherwise, I'd love it if you could take a look at my community. As I mentioned previously, it has been bumping recently; I was not expecting to get 150 people within about 72 hours. This has way surpassed my expectations, and there is so much juice dripping from every post here, it's insane. So if you want to join this community, keep in mind the price is increasing every 40 members or so, and it's coming up reasonably quickly. Take a look, and if it makes sense for you, hop on board.