Comments:
Thank you so much. I have been wondering what it was for a while. In 10 minutes you explained it very well. It's much easier to understand the basic idea behind the concepts from your videos than from papers/books.
So, 5 years ago this was your last video about MPC in the "Control Bootcamp" series. Unfortunately, MPC stopped here with Professor Steve! Anyway, thank you for the clear explanation.
Hi Sir! Thank you for this valuable, splendid explanation. Could you give an example, or are there any sources to look at for implementing it?
Thank you for this amazing lecture series. Please make more videos about model predictive control.
Great!
Why do we apply only the first control input proposed by the optimizer instead of using all the controls?
Hello professor! Could you explain invariant sets and maximal control invariant sets? I'm having trouble grasping the concept.
What I don't understand is why one needs output constraints, especially soft ones, since we have the setpoint anyway?
Ответитьwhat a fantastic explanation! Thanks
Two questions:
- Why do we bother optimizing over the entire horizon when only the t+1 step is necessary? Unless the optimization solution is generated starting from the end of the horizon backwards.
- Does it often happen nowadays to have real-time systems running their optimizations over the air on a powerful backend (cloud)? What types of systems are suitable, given the network latencies of doing so?
Great explanation
Great video Steve! Does anyone have a suggestion for creating a system model to use with MPC? I have a large amount of historical data for a mechanical vapor recompression system.
Nicely explained. Inspiring, thank you.
I want to ask: why do I need to calculate the whole horizon if I only ever need the first point of that horizon?
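This "why the whole horizon?" question comes up several times in the thread. A minimal, unconstrained receding-horizon sketch may make it concrete (the scalar plant, horizon, and weights below are illustrative choices, not values from the video): the whole horizon is optimized so that the first input accounts for the future, but only that first input is applied before re-planning from the new measured state.

```python
import numpy as np

# Illustrative scalar plant x_{k+1} = a*x + b*u (a > 1, so open-loop unstable).
a, b = 1.2, 1.0
N = 10            # prediction horizon (illustrative)
q, r = 1.0, 0.1   # state and input weights (illustrative)

def mpc_first_input(x0):
    # Predicted state: x_{k+1} = a^(k+1)*x0 + sum_{j<=k} a^(k-j)*b*u_j.
    # Minimize sum_k q*x_{k+1}^2 + r*u_k^2 over the WHOLE horizon.
    # With no constraints this is a linear least-squares problem.
    f = np.array([a**(k + 1) * x0 for k in range(N)])   # free response
    G = np.zeros((N, N))                                # forced response
    for k in range(N):
        for j in range(k + 1):
            G[k, j] = a**(k - j) * b
    A = np.vstack([np.sqrt(q) * G, np.sqrt(r) * np.eye(N)])
    y = np.concatenate([-np.sqrt(q) * f, np.zeros(N)])
    u = np.linalg.lstsq(A, y, rcond=None)[0]
    return u[0]   # receding horizon: apply ONLY the first input

x = 5.0
for _ in range(20):
    x = a * x + b * mpc_first_input(x)   # re-plan from the new state each step
print(abs(x) < 1e-3)   # the closed loop stabilizes the unstable plant
```

Re-planning every step is what makes MPC a feedback controller: the later inputs in each plan are never executed, but without them the first input would ignore the future cost and could not stabilize the plant.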
Great video! I appreciate that very much.
Get some new pens.
@Steve Brunton Suppose we have nonlinear dynamics and obtain a linear representation via the Koopman operator. Is it OK to apply MPC to the Koopman linear representation? If so, which is more reliable: linearizing about an equilibrium point, or Koopman?
Best video on MPC, period.
All these comments and none of them mention how well he writes backwards.
Great video Steve, thanks!
Bruh. How do I like this video twice? For anyone who didn't realize, he's writing backwards.
Steve, your lectures are very well structured. But I would like to mention that after plunging into more applied texts like Electrical Power System Stability and Control (Prabha Kundur, EPRI) and the Handbook of Electrical Power System Dynamics (IEEE Press), I really started to understand the whole subject 'physically'. By the way, power distribution entails a huge number of state variables, and the traditional numerical computation of eigenvalues (Francis / QR), which I think is what MATLAB/Octave or Julia use, is no longer valid there. Special techniques, not based on theoretical mathematics, are used instead: the software AESOPS-PEARL. To make a long story short, as you used to say, you need to iterate with various sources of learning, and ideally have a real-life application (as you do in some cases) to grasp the essentials (to get the 'aha' moment). But, well done. Later I saw your lectures on turbulent modal modeling; interesting, but as an 'old', 40-years-experienced practicing engineer, the whole subject of turbulence made me depressive, as it is never directly applicable. You have to tune in such a way that in the end all the effort of trying to implement a theory-based model is gone and useless. You need to put in tuned coefficients (8 years of personal experience in nuclear power plant simulator development). For example, the PID constants from the real power plant were never transferable to the simulator model, as nonlinear effects like friction and hysteresis that exist physically are barely modeled in numerical models; the reasons are the useless complication and, more fundamentally, that there is no way to measure them. The differences can be as high as 300% :(
Okay, now where do you use it?
I have been watching your lectures since I was admitted to my master's course in autonomous vehicles engineering. After some time, I wish to do a PhD under you!
Sir, which paper have you referenced in this?
What action will the MPC take when it has reached the setpoint?
Brilliantly explained!
and I'm looking forward to your lecture on Markov decision process.
I'm working on implementing a hybrid controller by combining the benefits of both MPC and Markov Decision Process.
I hope this hybrid controller would be more efficient in terms of computational time!!
This is the first lecture on MPC that I have seen that actually made sense to me and allowed me to understand what MPC is. The previous ones I attended were a soup of symbols and jargon with no meaning to me. Thank you so much.
Dear Dr Steve Brunton, why didn't you demonstrate this superb explanation (by you) in MATLAB?
Thank you for your videos! Are there any stability guarantees when using MPC on nonlinear systems?
Finally I made it to the last lecture of this series! After some revision I will start the Data-Driven Dynamical Systems with Machine Learning series. Thank you so much Steve!
Thanks
I work in MPC for autonomous cars...
Things will always be easy and smooth in simulation...
Closed loop on a real-time platform is where I'm facing the heat....😬
That is why I love control systems ..✌️
Thanks for this video! Could you perhaps do a video with a MATLAB example applying MPC to a linear model obtained from system identification?
In practice, we usually only have measurement data from a system, so both A and B are unknown in our model. It would be great to see a full example considering this.
Thanks for your work !
Thank you for the amazing video on this topic.
I am interested in adding stochasticity to nonlinear MPC, can you please refer me to some references on this topic?
Steve, thank you for the great lecture series. You'll be happy to know that people are still sitting down and watching the whole series.
I've seen you answer relatively recent comments in some other video, so if you see this, since you mention a number of times where you would make your students prove an equation or another, I was wondering at what level of learning you'd place the material in this bootcamp? I'm a PhD candidate in Wind Engineering so I have pretty much no background in control engineering, and it was easy enough to follow, but I know that this would have been incomprehensible for me just a few years ago. For software or mechanical engineers, would this material be covered in undergrad? Or are these graduate-level concepts?
Does the optimal control signal have to be a step function?
Wow Steve. Your channel is incredible. I am a ME student at Purdue, and I love learning these kinds of things. I have learned so much from your channel! Thank you!
Is MPC a numerical command?
Very good video! Thanks!!!
Learning neural weights + updating inference on the fly. I wonder if Tesla is already doing this for updating their Autopilot with edge cases?!
I don't understand. Isn't k equal to time t? What does k+1 imply? Is it the shift of the time horizon?
You have nicely explained a complex topic.
I've designed an MPC controller for a power converter, but I don't know how to tune the controller, because variation in the load affects its performance. Could you please suggest something regarding this?
Liked the lecture. Thank you sir.
Is MPC a feedback control?
Where is the continuation of this video?
Really well explained!
Hi... can I ask what the difference is between model-based control and model predictive control? And is model-based control one approach, like Ziegler–Nichols?
You are writing backwards, reading it backwards, and still explaining things clearly... well done and thank you.