
Implement a Reusable E2E Kubeflow ML Lifecycle #3728

Merged (15 commits) on Jun 11, 2024

Conversation

andreyvelich
Member

Based on our recent discussion with @franciscojavierarceo, I updated the ML lifecycle diagram in the architecture guides: #3719 (comment)
We can re-use this ML lifecycle diagram in each Kubeflow Component and explain the user value of that component.

I like the existing diagrams, but they are a little bit out of date.
I am happy to improve my diagrams based on your feedback.

Also, I removed unused images.

/assign @franciscojavierarceo @kubeflow/kubeflow-steering-committee @thesuperzapper @StefanoFioravanzo @hbelmiro

/hold for review

Contributor

Looks like a typo, should be Data Producers.

Member Author

Nice catch! @franciscojavierarceo I am wondering: should we add Data Producers to the Offline Feature Store as well?
E.g. Spark ingests data from Data Producers and extracts features.

Contributor

It may also be useful to add Feature Extraction to the Offline Store, to make it concrete how the offline store is used.

Contributor

Nice catch! @franciscojavierarceo I am wondering: should we add Data Producers to the Offline Feature Store as well?

Yeah, I think that's a great idea! It can get complicated if we get too specific, but if we keep it generic and create a box like we do for the online store, that works fine.

@StefanoFioravanzo
Member

@andreyvelich What exactly are you trying to accomplish? I didn't fully get this part

We can re-use this ML lifecycle diagram in each Kubeflow Component and explain the user value of that component.

Is this diagram supposed to be re-used by each component, and if so, how do you envision that?

@andreyvelich
Member Author

andreyvelich commented May 3, 2024

@andreyvelich What exactly are you trying to accomplish? I didn't fully get this part

We can re-use this ML lifecycle diagram in each Kubeflow Component and explain the user value of that component.

Is this diagram supposed to be re-used by each component, and if so, how do you envision that?

That's right. Please check these examples:

We can do the same for Model Registry, Spark Operator, and Notebooks if other WGs agree with that.

What do you think about it, @StefanoFioravanzo?

@StefanoFioravanzo
Member

Oh, OK, now I understand your approach, and I like it. You are proposing that we build a canonical Kubeflow ML lifecycle diagram and then highlight which parts of the diagram each component covers.

So, based on this, I propose two things:

  1. rename this PR to better represent what we are doing (e.g. Implement a reusable E2E ML lifecycle diagram or something like that)
  2. Consider using and adapting an existing diagram. There are many E2E ML lifecycle diagrams in the open source world, widely used and promoted by large organizations. One option is to overlay Kubeflow and its components on top of one of these.

If you want to keep the focus smaller and have a quicker iteration on the existing diagram, I am fine with it and you can ignore the two points above.

@StefanoFioravanzo
Member

cc @chasecadet, who can probably provide some good insight on this

@StefanoFioravanzo
Member

@andreyvelich a very good open source diagram that we can reuse is this one by the AI Infrastructure Alliance. See here https://github.com/ai-infrastructure-alliance/blueprints

There is no explicit license, but they do write in the README:

Please retain the AIIA Logo on the diagrams when you use them, otherwise you are free to modify them in any way you see fit.

I think this would be a pretty good starting point for a reusable diagram. They have an editable Figma file, and even an interactive version. Take a look at all the folders; there are various versions.

We could fork the repository under the Kubeflow org and adapt it to the various components. If we want, we could embed the interactive diagram in our website. If we are unsure about licensing and reusability of that content, I can reach out to a couple of folks at AIIA.

@StefanoFioravanzo
Member

I can see us doing something similar to this interactive version https://ai-infrastructure-alliance.github.io/blueprints/interactive-stack-diagram/stack.html where each option is one of the Kubeflow components. So you can see how the entire Kubeflow platform (we can have an "all" picker) covers the E2E ML lifecycle, or based on your a-la-carte choice.

@andreyvelich changed the title from "Update Kubeflow ML Lifecycle" to "Implement a Reusable E2E Kubeflow ML lifecycle" on May 6, 2024
@andreyvelich
Member Author

rename this PR to better represent what we are doing (e.g. Implement a reusable E2E ML lifecycle diagram or something like that)

That makes sense, renamed it.

@andreyvelich changed the title from "Implement a Reusable E2E Kubeflow ML lifecycle" to "Implement a Reusable E2E Kubeflow ML Lifecycle" on May 6, 2024
@andreyvelich
Member Author

If you want to keep the focus smaller and have a quicker iteration on the existing diagram, I am fine with it and you can ignore the two points above.

To be honest, I have concerns with the existing diagram, since it was created ~5 years ago and is very out of date. E.g. it doesn't include model fine-tuning, which is the modern approach for model development, and it doesn't have an online feature store. WDYT @StefanoFioravanzo @franciscojavierarceo?

@andreyvelich
Member Author

a very good open source diagram that we can reuse is this one by the AI Infrastructure Alliance. See here https://github.com/ai-infrastructure-alliance/blueprints

I like their diagrams, but they look similar to what we have in this PR, don't they?

E.g. the differences:

  • We simplify data sources for Data ingestion with Spark.
  • We don't introduce lakehouse concepts for Data Lakes.
  • We don't have model monitoring in serving to re-train model in production.

Maybe we can improve our diagram with additional stages?
WDYT @franciscojavierarceo @StefanoFioravanzo

@franciscojavierarceo
Contributor

I can see us doing something similar to this interactive version https://ai-infrastructure-alliance.github.io/blueprints/interactive-stack-diagram/stack.html where each option is one of the Kubeflow components. So you can see how the entire Kubeflow platform (we can have a "all" picker) covers the E2E ML lifecycle or based on your a-la-carte choice

I agree the old diagram is outdated.

I have a strong preference for a diagram that reflects the view of a Data Scientist and the needs in their workflow, which the diagram you proposed does. The AI Infrastructure Alliance diagram, I think, highlights the needs of different companies with different structures and, while that's helpful, I don't think it provides clarity on the value of Kubeflow.

@chasecadet

@StefanoFioravanzo finally getting to this! Before I say too much, I'd like to take a step back, because as we all know, "tactics without vision is just noise before defeat". I like the idea of an ML diagram. I would love to know what our vision for these documents is and how we are approaching this: someone reads the diagram, they learn X, and then they start building using Y and deliver Z value to their project/org.

Allow me to free associate here a bit on what I think would be interesting. I like the idea of talking about use cases for specific components, but I struggle with the idea of telling users what to do. I want to help them envision using these tools and enable them to creatively solve problems. Another way to say this is that I would love it if the users told us what they use these components for, in collaboration with our vision for these components. We as a community can provide guidance. If we act as a ground-truth authority on use cases, we might lose out on the value of new community members using the tools in powerful but unexpected ways we can later integrate into more robust use cases.

Questions I'd love to have answers to are:

  • What are the common use cases?
  • What are some considerations?
  • What pitfalls do we see?
  • How might we run into issues using these solutions in ways not intended?

We can touch on, say, using KFP without the Training Operator to attempt to run an XGBoost job vs. using and integrating the Training Operator, to show that you "can" do things in many ways but may lose out on overall value by redoing our engineering efforts through your own means (see the sketch after this comment).

That being said, stands on soap box
I love calling out the model development lifecycle according to this community and placing components within that lifecycle as suggestions. Some are more concrete than others (you can't use KServe to train a model), but it also shows that we have a flexible, composable, and integrated solution you can port anywhere to run MLOps at scale. I think @jbottum said it very well: the power of KF is more than just our components, it is the community. As we grow, we benefit from continuing to demonstrate the tribal community knowledge we are building and sharing with the world, so teams can "Go with the Kubeflow" knowing they are part of a community that is writing code with a purpose, using learnings from many orgs, communities, and perspectives to build a world-class MLOps solution that vastly democratizes access to ML/AI across the industry. Showing others what's in it for them when using KF will bring them into the community, keep it healthy, and fuel the next generation of contributors as we go from incubation to graduation and beyond. hops off soap box

Maybe I missed the point of the CC. I also have a chapter on the model dev lifecycle in that class I built. I officially own the content, and we can use it how we see fit to create some MLOps-like documents.
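To make the KFP-versus-Training-Operator contrast above concrete, here is a rough sketch (not an official docs example) of what "integrating the Training Operator" could look like: submitting a distributed XGBoostJob custom resource and letting the operator wire up the master/worker rendezvous. The image, namespace, and job name are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# One replica template reused for master and workers; the container must be
# named "xgboost" so the Training Operator recognizes it. Image is hypothetical.
replica = {
    "replicas": 1,
    "restartPolicy": "Never",
    "template": {
        "spec": {
            "containers": [
                {"name": "xgboost", "image": "example.io/xgboost-train:latest"}
            ]
        }
    },
}

xgboost_job = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "XGBoostJob",
    "metadata": {"name": "xgboost-dist-train", "namespace": "kubeflow-user-example-com"},
    "spec": {
        "xgbReplicaSpecs": {
            "Master": replica,
            "Worker": {**replica, "replicas": 2},  # scale out workers
        }
    },
}

# Submit the custom resource; the operator handles distributed coordination.
api.create_namespaced_custom_object(
    group="kubeflow.org",
    version="v1",
    namespace="kubeflow-user-example-com",
    plural="xgboostjobs",
    body=xgboost_job,
)
```

Re-implementing the same distributed job as a plain pipeline step would mean managing that coordination yourself, which is exactly the lost value being pointed at above.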

@andreyvelich
Member Author

@StefanoFioravanzo @franciscojavierarceo I've made a few updates to the lifecycle diagram based on the feedback.
Does it look good to you?
I think we can merge this PR before the Kubeflow 1.9 release.

@franciscojavierarceo
Contributor

@StefanoFioravanzo @franciscojavierarceo I've made a few updates to the lifecycle diagram based on the feedback. Does it look good to you? I think we can merge this PR before the Kubeflow 1.9 release.

Looks great!

@andreyvelich
Member Author

/hold cancel

@andreyvelich
Member Author

@thesuperzapper @StefanoFioravanzo @franciscojavierarceo @hbelmiro I removed the changes to the start page from this PR; I will create a separate PR to update it.
Are we ready to merge this PR?

Contributor

The only persona shown here is ML Engineer, which in my opinion is not correct, as Data Preparation can be done by a Data Engineer. Similarly, Model Development, Hyperparameter Tuning, and Model Training can/will be done by a Data Scientist.
My suggestion would be to remove the ML Engineer persona.

Member Author

That's right, but in different use-cases Data Processing can be done by ML Engineers, especially when Spark is integrated with Jupyter Notebooks (see the sketch below).
This is just an example of the ML lifecycle; I am not sure we can cover all use-cases and personas here.
WDYT @StefanoFioravanzo @franciscojavierarceo @hbelmiro?
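For illustration, a minimal sketch of that notebook-centric workflow: an ML engineer spinning up Spark on Kubernetes straight from a Jupyter notebook. The executor image, namespace, and bucket path are hypothetical.

```python
from pyspark.sql import SparkSession

# Start Spark directly from the notebook against the in-cluster Kubernetes API server.
spark = (
    SparkSession.builder
    .appName("notebook-data-prep")
    .master("k8s://https://kubernetes.default.svc:443")                 # in-cluster API server
    .config("spark.kubernetes.container.image", "spark:3.5.1")          # executor image (assumption)
    .config("spark.kubernetes.namespace", "kubeflow-user-example-com")  # hypothetical namespace
    .config("spark.executor.instances", "2")
    .getOrCreate()
)

# Explore raw data interactively; the bucket path is a placeholder.
raw = spark.read.parquet("s3a://raw-data/events/")
raw.printSchema()
raw.show(5)
```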

Member

I understand that data preparation is done by data engineers, but considering we need to show an E2E flow that covers all Kubeflow components, and we just brought the Spark Operator into the ecosystem, we should cover data preparation too.

Contributor

The only persona shown here is ML Engineer, which in my opinion is not correct, as Data Preparation can be done by a Data Engineer. Similarly, Model Development, Hyperparameter Tuning, and Model Training can/will be done by a Data Scientist.

This varies heavily by company. I've worked at many places where the MLE does this, fwiw.

I added the persona to highlight explicitly how an ideal user should think about this workflow. Maybe this could be amended to add more personas, but I worry about the clarity.

#3728 (comment)

Contributor

I understand that data preparation is done by data engineers, but considering we need to show an E2E flow that covers all Kubeflow components, and we just brought the Spark Operator into the ecosystem, we should cover data preparation too.

@rimolive @andreyvelich I am 100% with you on that, and the answer to this depends on the org structure or the MLOps literature one follows. My question really is, from a tool/platform perspective, should we be putting personas in the documentation, as a lot of this is a grey area? Also, given the Spark Operator is fully onboarded with Kubeflow, should we put it in the main architecture diagram or not? I have put this as a comment on the main PR as well.

Member Author

From my perspective, this is out of scope for this PR. This PR is an initial change to the architecture page to make sure our lifecycle diagrams represent an up-to-date view of the Kubeflow components.

Also, the CNCF white paper already has a personas explanation, which might be useful for orgs who are looking at Kubernetes as a primary platform for AI/ML infra: https://www.cncf.io/wp-content/uploads/2024/03/cloud_native_ai24_031424a-2.pdf
cc @zanetworker @ronaldpetty @raravena80

Contributor

I would also suggest splitting the Model Serving box in two, i.e. Model Serving and Model Monitoring/Drift Detection, as KServe has components to do that.

Member Author

E.g. Model Monitoring and Drift Detection are part of model serving from my point of view. If we want to split this block, we should say Online Inference vs Batch Inference, but I am not sure we need to explain such details.
It's like with Spark: you can do Data Ingestion, Data Processing, Feature Engineering, etc., but we haven't explained everything in this lifecycle diagram.

I hope that more detailed diagrams can be shown in the KServe docs.

Contributor

@andreyvelich as a consultant, I can vouch that not many people know KServe has drift detection capabilities, hence my request to put it there.
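For readers who land here, a rough sketch of the pattern being discussed, assuming the KServe Python SDK: the predictor's payload logger forwards requests and responses to a separately deployed drift detector (e.g. Alibi Detect). The model URI, names, namespace, and detector URL are placeholders, not an official example.

```python
from kubernetes import client
from kserve import (
    KServeClient,
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1PredictorSpec,
    V1beta1SKLearnSpec,
    V1beta1LoggerSpec,
)

# Predictor that serves the model and logs every request/response payload
# to a drift-detector service deployed alongside it.
isvc = V1beta1InferenceService(
    api_version="serving.kserve.io/v1beta1",
    kind="InferenceService",
    metadata=client.V1ObjectMeta(name="income-model", namespace="kserve-demo"),
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            sklearn=V1beta1SKLearnSpec(
                storage_uri="gs://example-bucket/models/income"  # placeholder model URI
            ),
            logger=V1beta1LoggerSpec(
                mode="all",  # forward both requests and responses
                url="http://drift-detector.kserve-demo.svc.cluster.local",  # placeholder detector endpoint
            ),
        )
    ),
)

KServeClient().create(isvc)
```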

Member Author

That's right, and that is why they should explore the individual component docs for it.
E.g. if you know that you need a model serving component for your AI/ML infra, you will explore the KServe docs.

It is just impossible to show everything in this end-to-end ML lifecycle diagram.

Contributor

In line 40, the definition for Data Preparation can be reworded to say:

In the Data Preparation step you ingest/raw data and transfer it to perform feature engineering to extract ML features for the offline feature store, and prepare training data for model development. Usually, this step is associated with data processing tools such as Spark, Dask, Flink, or Ray.

Member Author

What do you mean by "you ingest/raw data"? Raw data?

Contributor

Sorry, that was a typo.

Member Author

I guess the idea of this statement is to say that you use Spark to ingest raw data and process it.
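As a hedged illustration of that sentence, a small PySpark sketch: ingest raw data from upstream data producers, run feature engineering, and materialize the result for the offline feature store. Paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("feature-engineering").getOrCreate()

# Ingest raw data produced upstream (path is a placeholder).
raw = spark.read.json("s3a://raw-data/clickstream/")

# Feature engineering: turn raw events into per-user ML features.
features = (
    raw.groupBy("user_id")
       .agg(
           F.count("*").alias("event_count"),
           F.avg("session_seconds").alias("avg_session_seconds"),
       )
)

# Materialize the features so the offline feature store and training jobs can reuse them.
features.write.mode("overwrite").parquet("s3a://feature-store/offline/user_features/")
```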

Contributor

The only persona shown here is ML Engineer, which in my opinion is not correct, as Data Preparation can be done by a Data Engineer. Similarly, Model Development, Hyperparameter Tuning, and Model Training can/will be done by a Data Scientist.
My suggestion would be to remove the ML Engineer persona or show other personas as well.
Also, I would suggest splitting the Model Serving box in two, i.e. Model Serving and Model Monitoring/Drift Detection, as KServe has components to do that.

Contributor

The only persona shown here is ML Engineer, which in my opinion is not correct, as Data Preparation can be done by a Data Engineer. Similarly, Model Development, Hyperparameter Tuning, and Model Training can/will be done by a Data Scientist.

This varies heavily by company. I've worked at many places where the MLE does this, fwiw.

I added the persona to highlight explicitly how an ideal user should think about this workflow. Maybe this could be amended to add more personas, but I worry about the clarity.

Contributor

@franciscojavierarceo these are my thoughts as well; this gets political with who does what, and there is no simple answer, hence I was wondering whether we should get into personas at all.

Contributor

Yeah, that makes sense. I definitely understand how it can be a rabbit hole. I am generally customer-centric, so my goal was really just to elicit the value prop for people who are quickly thinking "why should I, as someone who builds models, care about Kubeflow?"

Member Author

Yes, the main goal and motivation of this page is to explain the value of the Kubeflow ecosystem to our users.

Contributor

The only persona shown here is ML Engineer, which in my opinion is not correct, as Data Preparation can be done by a Data Engineer. Similarly, Model Development, Hyperparameter Tuning, and Model Training can/will be done by a Data Scientist.
My suggestion would be to remove the ML Engineer persona or show other personas as well.
Also, I would suggest splitting the Model Serving box in two, i.e. Model Serving and Model Monitoring/Drift Detection, as KServe has components to do that.


I think the issue here is that the lines are blurred, and there is no prescriptive authority on how this works. What I would do is call that out. "To scale, you have to specialize," but right now MLOps (and Kubeflow) are incubating, so the average user wears many hats. If an MLE, a data engineer, or a computer engineer wants to do data prep, nothing stops them, as long as they aren't leaving other work untouched. Ultimately this is a business and engineering management conversation.

Contributor

+1 @chasecadet

As mentioned in another comment, I've worked at several places where the MLE was responsible for all of this.

Contributor

@chasecadet @franciscojavierarceo the question is not who does what, as that is very subjective; the question is whether we should get into personas at all.

Contributor

Yeah that makes sense.

Really I just wanted to provide high level clarity about the value proposition of Kubeflow for MLEs or data scientists or whatever they're called this week.


@vikas-saxena02 @franciscojavierarceo THIS IS GREAT. So here is the philosophical/KF values question. My biggest power as a solutions architect is saying "my customers commonly do XYZ". So we need to decide: are we doing this textbook style, "this is the world we live in", where we need to point to an authority (@andreyvelich and I were discussing "whose ML lifecycle are we referencing"), or do we make this more community- and experience-based, where we say "we commonly see MLEs within the Kubeflow community leverage these tools, aligned to the ML lifecycle we have defined based on community feedback", etc.?

Andrey mentioned that the ML lifecycle we are using was sourced from the CNCF white paper by other professionals who worked to define it. That is totally fine, but we need to give the lineage of our information, call out when it can be considered subjective, and frame what we are defining as something based on what we have seen and agreed upon in our community (something that is powerful, but not necessarily the be-all and end-all) and how new users can align themselves to it. We can also provide a place to discuss and challenge our ML lifecycle opinions, but if we say "we commonly see data engineers using X", then it's not necessarily us telling you what to do; it is mentioning what we have seen so far and opening the door to new perspectives. This also helps us stay out of people's scopes if they say "well, the KF community said this is an MLE tool, so I didn't use it for data engineering and/or told off my data engineer". We have to be careful when we are being prescriptive, because we could be liable and lose credibility as a community.

If this is our "current world view, open for discussion/growth", we invite discussion and contribution instead of enforcing our world view. That being said, we can 1000% defend our viewpoint as we continue to gather data and understand how organizations do MLOps with KF, and not just let anyone reinvent the lifecycle, while still keeping the door open in case someone has something the community can discuss as a viewpoint that makes sense to adopt or call out.

Contributor

Yeah, I think it's a great idea to call out that in practice the lines end up blurry between DE/MLE/DS for some orgs versus others.

I definitely welcome feedback and iteration on this! I think having this guidance is very useful, though, as it can provide a lot more clarity to the end user on why an MLOps team may be recommending Kubeflow.

Andrey and I drafted this based on the CNCF diagram and modified it a little bit, but, again, the language around personas across the industry is pretty fuzzy, so I think sharing it with an asterisk is very helpful. It would also be valuable to hiring managers/executives who are trying to make staffing decisions but may not have a nuanced view of things.

Member Author

I generally agree with these points @chasecadet, but again, it is out of scope for this PR.
This PR just explains the value of the Kubeflow components in the ML lifecycle, and of course you can integrate other components from the AI/ML landscape into your AI/ML infra.

We can always iterate on and improve our architecture page if the Kubeflow community agrees.

@vikas-saxena02
Contributor

vikas-saxena02 commented Jun 9, 2024

@andreyvelich my two cents:

  • The architecture diagram under Kubeflow Ecosystem that we modified previously as part of my PR should be updated to include the Spark Operator, since the other diagrams in this PR have it. I think the big block on the right under Integrations is the best place; the other option would be to put it under Kubeflow Components.
  • The diagrams that have been added only show the ML Engineer persona; other personas such as Data Engineer and Data Scientist are not part of any of the images. I would suggest either removing the ML Engineer persona or adding other personas.
  • The Model Serving box can be split into Model Serving and Drift Detection/Model Monitoring, which is a key USP for KServe.

Happy to help with making the changes if you need some help.

@andreyvelich
Member Author

The architecture diagram under Kubeflow Ecosystem that we modified previously as part of my PR should be updated to include the Spark Operator, since the other diagrams in this PR have it. I think the big block on the right under Integrations is the best place; the other option would be to put it under Kubeflow Components.

We will include Spark Operator + Model Registry in this diagram once we make the first official release for these components.

@chasecadet

I'm just adding some details here. I have a ton of content around the ML lifecycle from the course that we can use, and it's free; I own it. https://docs.google.com/document/d/1t2gTTQolI7DfLQJUbhSqd8bxhrIVqZOIU8dKGiTrHoo/edit?usp=sharing @andreyvelich @StefanoFioravanzo, feel free to take a look and see what we can use. I included model monitoring as part of serving and also mentioned model retirement.

@chasecadet

Also, @andreyvelich, keep me posted on this. I can update the course with our official ML lifecycle as well as updated architecture diagrams.

@andreyvelich
Member Author

I'm just adding some details here. I have a ton of content around the ML Lifecycle we can use from the course, and it's free. I own it. https://docs.google.com/document/d/1t2gTTQolI7DfLQJUbhSqd8bxhrIVqZOIU8dKGiTrHoo/edit?usp=sharing @andreyvelich @StefanoFioravanzo, feel free to take a look and see what we can use. I included model monitoring as part of serving and also mentioned model retiring.

That's great @chasecadet, it would be nice if you could present it sometime in one of our community calls and collect feedback.

@andreyvelich
Member Author

@franciscojavierarceo @thesuperzapper @vikas-saxena02 @chasecadet @StefanoFioravanzo @hbelmiro @kubeflow/kubeflow-steering-committee I think we can merge this PR if you don't have any strong objections.
As @franciscojavierarceo and I said before, we can always iterate on our architecture page to better explain the value of the Kubeflow components.

Contributor

@franciscojavierarceo left a comment

🚀

@vikas-saxena02
Contributor

@andreyvelich no strong objection. Just another recommendation to add the CNCF paper as a reference.

@vikas-saxena02
Contributor

/approve

@thesuperzapper
Member

@andreyvelich While we can always make improvements (and I am sure we will in future PRs) this update is a significant improvement to the architecture page and I think it's worth merging now.

/lgtm

@andreyvelich you will probably need to approve this, as it needs a root approver given the number of files changed.

Member Author

@andreyvelich left a comment

Great! Thanks everyone for your review; I am looking forward to sharing this with our users.
/approve


[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: andreyvelich, franciscojavierarceo, vikas-saxena02


@google-oss-prow bot merged commit 2bb99e7 into kubeflow:master on Jun 11, 2024
7 checks passed
@andreyvelich deleted the ml-lifecycle-diagram branch on June 11, 2024, 20:10