Comments:
AWS is a great way (or a gateway) for you to get Jeff'ed.
Fast forward to 2023: the Prime Video team agrees with Ben.
2023: Amazon just confirmed you were right.
This guy thinks Kubernetes doesn't run on a server.
Maybe serverless just doesn't make sense for your use case?
I like AWS Lambda and Google Firebase; both are great, honestly.
Wait 500ms for a single request to warm your AWS function (which can be a timed event), OR pay for an entire K8s cluster and destroy your wallet.
I'll probably stick with the first option.
You are focusing too much on synchronous API calls. If you are supporting your API with serverless functions then yes - cold starts suck. If, on the other hand, you are using serverless for async operations in event-driven systems, where response times do not exist - then cold-starts do not matter that much.
Try this example - API running on a kubernetes cluster (or ECS, whatever you like), at the point of receiving requests it publishes an event to SNS, you have SQS subscribed to this and Lambda triggering off that event, processing some data asynchronously.
Or another one: API call -> kubernetes -> something saved into DynamoDB -> Lambda triggered from DynamoDB stream.
A real-world example: a user buys something in an online store; you do not need to generate an invoice "in real time" before showing them the purchase summary. A Lambda asynchronously triggers off that event, generates the invoice, and it gets emailed.
THIS IS where serverless TRULY SHINES.
Thank you.
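The SNS → SQS → Lambda flow described above can be sketched as a minimal handler. This is an illustrative example, not anyone's real code: `generate_invoice` and the `order_id` field are made-up names, but the event shape (SQS records whose bodies carry an SNS envelope with a `Message` field) matches how an SNS-fed SQS queue delivers payloads to Lambda.

```python
import json

def handler(event, context=None):
    """AWS Lambda entry point for an SQS trigger.

    With an SNS topic feeding an SQS queue, each SQS record's body is an
    SNS envelope whose "Message" field holds the original payload.
    """
    invoices = []
    for record in event["Records"]:
        envelope = json.loads(record["body"])    # SNS envelope
        order = json.loads(envelope["Message"])  # original event payload
        # Asynchronous work: no user is waiting on this response,
        # so a cold start here costs nothing user-visible.
        invoices.append(generate_invoice(order))
    return {"generated": len(invoices)}

def generate_invoice(order):
    # Placeholder for real invoice generation and emailing.
    return f"invoice-{order['order_id']}"
```

Because the caller already got its HTTP response back at the API layer, the cold-start latency of this function is invisible to the end user.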
They sounded like a stupid idea the day they first appeared... because it IS a really stupid idea.
In the past, I would have agreed with you on the two things you said in the first sentence.
Today, I understand that there is sometimes an appropriate use for serverless — for example, for models or calculations. In my opinion, less so for routine things like an image-resizing service for customers.
Serverless requires you to trust the provider to not fuck you over
I'm confused by headless ... girls
Would love your opinion on Google's Cloud Run, which can handle multiple requests per instance. You can also eliminate cold starts by handling the SIGTERM that Cloud Run sends just before it terminates an instance, and using it to call your own Cloud Run service, thus provisioning another instance in a ready state — while only paying when requests are being processed (assuming you allow CPU throttling when no requests are in flight).
"I wanted that. And I wanted that serverlessly." 🤣
Somewhere I read/saw a video saying that monolithic is better and cheaper, and that Amazon saved a lot of money by switching to a monolith.
Not sure about it though.
True serverless would be to run that image resizer on the client, duh.
Cold starts don't matter in practice. Having even 100 calls to the Firebase API per day will result in blazing speeds (pretty much equal to the ping between the two endpoints).
This video aged beautifully.
Can vouch for Cloudflare Workers; it's the best of a bad bunch.
"This thing, that I don't really know much about, seems to not work well for all use cases. Therefore, it is trash."
I wonder if people's opinion has changed after AWS itself said it's more expensive to run serverless stuff.
The point is to keep starting new things, so there's a limit on how far you'll go with the perfectly useful tech we already have, all for the sake of opening new business opportunities. It's at the point where there is no net gain in usefulness or functionality from the end user's perspective, and the change occurs solely for the sake of fog of war. You are free to resist, and you should. Make something useful from fricken PHP just to spite the new crop of CS grads, lol.
Look, you need to consider your use case before you decide to use a microservice. For medium/enterprise-size businesses, using microservices for some things makes a lot of sense and is a lot easier than maintaining a monolith architecture.
Also, Azure Functions can live as either standalone functions, or you can spin up an App Service. You'd likely not run into cold-start issues in that case, but you'd then be paying for an App Service.
Third, pure speed isn't the only reason you'd go with one provider or another. For functions that get called frequently, you should be running something like an App Service anyway.
Never go pure serverless or pure containers... a hybrid will be better, so you don't end up fighting the tool.
The real winner on GCP is actually Cloud Run, not Cloud Functions. Instead of writing small functions, you just upload your whole Docker container with your app written in the language and framework of your choice, and that's it.
My holy goodness! Resizing an image in native C/C++ takes a few milliseconds and uses at most 2x the uncompressed image size in memory (say 25 MB), and the binary is like 100 KB max. Modern computing, I guess.
McLovin talks about serverless. YES!
Serverless... you mean a laggy pod with a startup delay running on someone else's server.
I've used Firebase and I like it a lot. Very easy to set up and deploy code. I'd say the biggest issue with serverless functions, at least with AWS, is just learning everything. When I first started with AWS it was like trying to solve a puzzle without a picture to work off of. Now I just use Terraform to automate all of the infrastructure setup. Makes it a breeze to provision anything I want in minutes.
As far as cold-start latency goes, I've found it doesn't affect much in actual production applications that don't require super fast speeds.
"I don't understand something, so I avoid it like the plague." That's a winning mindset, let me tell you. /s
Funny how well this aged. The Amazon Prime team surely watched this video.
It is worth noting that AWS Lambda CPU speed scales with RAM up to 1 GB. Beyond that you get more cores, which in the case of Node.js is not useful.
So if your Node.js app consumes under 1 GB of RAM, you will get maximum performance at 1 GB.
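For context on the numbers in this comment: AWS's documentation puts one full vCPU at roughly 1,769 MB, with CPU allocated proportionally to memory below that, topping out around 6 vCPUs at the 10,240 MB maximum. A rough back-of-the-envelope helper (the 1,769 MB constant is from AWS docs; everything else here is illustrative, not an AWS API):

```python
FULL_VCPU_MB = 1769   # per AWS docs, ~1,769 MB corresponds to one full vCPU
MAX_VCPUS = 6.0       # rough ceiling at Lambda's 10,240 MB maximum

def approx_vcpu_share(memory_mb: int) -> float:
    """Rough fraction of vCPU a Lambda gets at a given memory setting."""
    return min(memory_mb / FULL_VCPU_MB, MAX_VCPUS)

# A 512 MB function gets well under a third of a vCPU,
# which is where cold starts and slow handlers really hurt.
print(round(approx_vcpu_share(512), 2))
print(round(approx_vcpu_share(1769), 2))  # → 1.0
```

This is why bumping memory often makes a CPU-bound Lambda both faster and, counterintuitively, sometimes cheaper: billing is duration times memory, and duration can drop faster than the memory price rises.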
He was a hero.
Amazon was just 2 years behind Ben in figuring that out. The same Amazon that might reject him in a DSA interview. 😂😂😂
Neither do React Hooks, but your generation keeps shitting them out.
Why can't you just do the resizing on the client?
Have no idea why this two-year-old clip popped up in my feed; would be interested to hear Ben's opinion on the topic today.
It's all marketing BS...
#1 How often are you resizing an image? #2 Resizing an image can be done on the client side; let the user's computer do that work, it's free for you. #3 Functions stay warm for a short period after a call, so anything invoked more often than that window never cold-starts. For ones that happen less often but that you still want warm, you can tell them to stay warm.
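The core of point #2 is just aspect-ratio math, which is trivial enough to run anywhere, including the client. A small sketch of the dimension calculation (function name and bounds are illustrative; in a browser you'd feed the result to a canvas `drawImage` call):

```python
def fit_within(width: int, height: int, max_w: int, max_h: int) -> tuple[int, int]:
    """Scale (width, height) down to fit inside (max_w, max_h), keeping aspect ratio."""
    scale = min(max_w / width, max_h / height, 1.0)  # never upscale
    return round(width * scale), round(height * scale)

print(fit_within(4000, 3000, 800, 800))  # → (800, 600)
```

Uploading the already-resized image also saves bandwidth on both ends, which is often a bigger win than the compute itself.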
The language you use is never what makes things slow; that performance difference is in the microseconds. It's always the network.
The intro tho, haha.
Yes, if you have enough steady traffic, you probably should put your code on a server. If not, then Lambda is an option if you have really spiky loads. You're going to be setting up API Gateway either way.
Your mistake with serverless on GCP was not using Go.
My man knew....
We have plenty of processes that take minutes if not hours to run. Waiting a few extra seconds on a few parts isn't a big deal. It all depends on what you are doing.
Coming from the future: you were right.
THIS AGED WELL!