Adoption SIG: Declarative Integration Working Group #68
2nd meeting is scheduled for Feb 28, 2023, 10:00 AM - 11:00 AM America/Toronto (GMT-05:00)

Notes from Meeting 2 (paraphrased)

Action Items from Previous Meeting

Attendees

Discussion
Tom: What is the result of internal conversations about microservice plugins?
Patrick: The need to improve the frontend plugin system is known, but the team doesn't have enough capacity to change two parts of the Backstage framework simultaneously. The roadmap for Q2 is still pending, which makes planning new major projects difficult.
Taras: Can we work on multiple tracks at the same time? The work on the Backend Extension system is a prerequisite to microservice plugins, and the Backstage maintainers can support designing new features but not implementing them. Can we start designing the new Frontend Extension system while helping to finish the Backend Extension migration?
Patrick: There is a list of plugins that we have to migrate to finish the Backend Extension work. There are examples in the repository of previously migrated plugins. The biggest blocker is the migration of some of the core plugins like the Catalog and the Scaffolder. The migration of these plugins will likely identify gaps in the Backend Extension system that need filling. In addition, there is one other area that requires design: we need to implement plugin discovery that can support deploying plugins to Kubernetes and as a monolith.
Patrick: Does the Webpack Module Federation framework require backend plugins to serve the frontend plugins?
Tom: Not necessarily. It's just JavaScript - we can load it from anywhere. In OpenShift Console, we load the JavaScript from HTTP servers that serve JavaScript bundles, but a CDN can also serve them.
Patrick: Is there central orchestration?
Tom: The host app defines extension points. Each plugin specifies which extension points it connects to via a manifest.
Patrick: How would orchestration happen for users of the open-source Backstage project?
Tom: Backstage will have to specify which plugins are available and which extension points plugins can extend. The app will determine configuration options for plugins.
Tom: One of the benefits of using the framework that Red Hat created for the OpenShift Console is the ability to share frontend plugins between OpenShift and Backstage. A plugin created for OpenShift will be directly pluggable into Backstage.
Patrick: I can see the benefit of having a shared framework.
Tom: We're working on documentation for our framework, and we have started pulling the code into repositories.
Patrick: What is the developer experience of people building plugins?
Tom: Developers don't need Kubernetes. We provide a container image that the developer can run with Docker, and they run Webpack locally. It supports hot reloading and works very well.
Patrick: How would a developer run multiple plugins at the same time?
Tom: You'd have to run multiple Webpack instances, but they don't consume many resources.

Please pick a time for the next meeting: https://doodle.com/meeting/participate/id/bWPVRQEd
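For reference, a rough TypeScript sketch of the manifest idea Tom describes above (the host app defines extension points; each plugin declares which ones it contributes to via a manifest). All type and field names below are hypothetical illustrations, not the actual OpenShift Console or Backstage API.

```ts
// Hypothetical shapes only, sketching the "host defines extension points,
// plugin declares contributions in a manifest" model discussed above.
interface PluginManifest {
  name: string;                  // unique plugin name the host uses to register it
  version: string;               // plugin version, so the host can track upgrades
  baseUrl: string;               // where the host loads the plugin's JS bundles from
  extensions: ExtensionEntry[];  // contributions to extension points the host defines
}

interface ExtensionEntry {
  extensionPoint: string;                // an ID the host app defines, e.g. "app.nav/item"
  module: string;                        // exposed module implementing the extension
  properties?: Record<string, unknown>;  // extension-point-specific configuration
}

// A plugin's manifest could then look like this:
const exampleManifest: PluginManifest = {
  name: 'my-plugin',
  version: '1.0.0',
  baseUrl: 'https://cdn.example.com/my-plugin/',
  extensions: [
    {
      extensionPoint: 'app.nav/item',
      module: './NavItem',
      properties: { title: 'My Plugin' },
    },
  ],
};
```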
Meeting 3: March 23, 2023

Video Recording: https://www.youtube.com/watch?v=esjrmNgKO7Q

Transcript

00:06 Taras Mankovski Patrick, I need to give you probably permission.
00:15 Patrik Oldsberg Oh, I have all - sorry. There's a bit of background noise from Automat. Automato?
00:25 Taras Mankovski Maybe I can use Automato.
00:32 Patrik Oldsberg There we go. Thank you. All right. I had opened this one, I believe.
00:46 Patrik Oldsberg Yes. You can see the Backstage microsite now, I hope.
00:48 Patrik Oldsberg I think I'll just kind of go, and stop me if you have any questions or want to take the discussion in any other direction. But we'll start out with just the tooling: kind of how it works, how it's set up, and also where to find more information about it. I'll actually start in that end.
01:17 Patrik Oldsberg So on the microsite, we have the local development section, the CLI. There's an overview here which introduces a kind of glossary of what we consider these terms to mean. I will be talking about bundling and building, for example, where bundling is kind of the webpack part, but we also do it for the backend.
01:38 Patrik Oldsberg Bundling is the process of building something that is shippable and deployable into production, you could say - typically production, right? A deployable artifact. While building is building an individual package, typically for publishing that package to npm. And the build system is set up around those two main things overall.
02:07 Patrik Oldsberg So we have design considerations. We ship all of this through the Backstage CLI, which internally, of course, uses webpack and Rollup for a lot of the things that we want to do; it forwards to Jest for testing and ESLint for linting. I'll keep it light on those bits and focus on the build tooling right now.
02:32 Patrik Oldsberg It's a monorepo setup. We want it to be possible to publish individual packages and to be able to ship things like that via npm in a repo that is also used for other things. So the system is intended to mesh pretty well between packages that are published and applications that are bundled and deployed.
02:37 Patrik Oldsberg We want it to be very scalable. We have a very large monorepo internally; we are going to be looking at ways in which we can break down these repos. But either way, scaling is a priority, to be able to support large repos.
03:17 Patrik Oldsberg And that just means that there are some design decisions where scale is a limiting factor. For example, we chose to have everything be one huge TypeScript compilation, and that is because of scaling constraints. When you run TypeScript sub-projects - at least when we evaluated this a while back - if you split things up into separate, smaller projects at the TypeScript level, the build times just skyrocket and don't scale; you're seeing build times of 10-20 minutes, up to an hour, to compile the project.
03:58 Patrik Oldsberg Right. So we're trying to have a system where you can build isomorphic packages, frontend and backend packages, with one system. And that's at the build system level.
04:09 Patrik Oldsberg We don't really have a desire to run all of this at runtime at the same time at the moment. But the build system somewhat seamlessly just functions with all of these different ways of building and using packages. Yeah, and some other things like modern environments and so on.
04:34 Patrik Oldsberg One very core thing that we do - and the developer experience in the repository, in the project, is a high priority - is that we want it to be possible to essentially go in, write yarn dev, and then start writing code and start making changes. So there should be no need to... and especially with tight iteration loops as well.
04:59 Patrik Oldsberg Right, so we avoid building packages during development at all. We do all development straight from TypeScript source, and we've set up both frontend and backend to do transpilation on the fly of all of the monorepo packages without needing an intermediate build step. And at least when the Backstage project was set up, that was a fairly uncommon thing.
05:28 Patrik Oldsberg Many monorepos had a setup where you could develop on one individual package, but you had to build the packages to try them out as a whole. So that's one of the things that we very much wanted to avoid, and I still consider it a constraint that you should be able to just go into the repo and start development without a lot of ceremony to get started. Is that helpful so far? Is there any direction you would like to point me in? Otherwise I will just dive a bit into the tech stack of the CLI, I think.
06:16 Tom Coufal Let's dive into the tech stack of the CLI. I think as a generic overview, this is good.
06:27 Patrik Oldsberg So let me think for a second about the best place to look at this. I think we will just jump into a bit of code in the CLI package and see the setup of these things, because as I mentioned, it's basically webpack and Rollup that we use, and webpack is responsible for the bundling. I think I'll leave the backend out of this completely for now, because it would just complicate the discussion - we do use webpack for other things there right now. I'll just focus on the frontend and what the frontend setup looks like.
06:27 Patrik Oldsberg So yeah, it's divided into building and bundling. As I said, building - that is when you want to publish a package.
07:20 Patrik Oldsberg It is not used for anything else, almost ever. So the foundation for that is Rollup. Let's see what the configuration setup looks like.
07:20 Patrik Oldsberg It is really not that large. So we have a setup that produces a Rollup config, and it's tied into the primary way in which we configure the packages: how we generate a Rollup config is through this package role that we have for every package in a Backstage monorepo.
07:26 Patrik Oldsberg So let me see if I can find that. Sorry, I'll jump over to the documentation again. Here we have the different package roles that exist, and this is something that you declare in the package.json.
07:26 Patrik Oldsberg So you can say that the package is a frontend, which is like the app; a backend, again an entry point, the thing you deploy; or a CLI tool. And then, more interestingly for building, the different library packages - it is only these packages that building is really meaningful for, besides shipping the CLI as well. These are library packages, so to speak, where you build something and you can publish it.
07:26 Patrik Oldsberg So you can see we have, for example, web-library, node-library and common-library. Jumping back to the role configuration, there's logic that just determines: okay, we have a web library, that means we only need to produce ESM, and then we set up an ESM build.
There's logic to say that if it's a node library, we only need CommonJS for now. And this is one of the reasons we have this indirection: it's set up to be able to evolve as the ecosystem changes and we start producing ESM for backend packages and so on, too.
09:15 Patrik Oldsberg Other than that, we have a fairly hard-coded list of file extensions and transforms that we have for different types of assets. We keep those in sync between the webpack config and the Rollup config manually, and that's how we set that up. We target a TypeScript or ECMAScript version that we bump every now and then in accordance with our Node.js release policy.
09:43 Patrik Oldsberg Really, Node.js tends to be the one that is the furthest behind; browsers tend to be far ahead. Yes, so that's the Rollup configuration. I can't show it here, but for any package it's going to produce output in a separate dist folder; it can produce CommonJS, ESM, and type definitions as well. So that's an important part too.
10:16 Patrik Oldsberg Of course, for any package that has an external TypeScript interface, it bundles up the type definitions of the package as well and makes sure those are available. Webpack-wise, I don't know, it's fairly straightforward. There's some trickiness we do for monorepo development, where we make sure the resolution happens correctly when you're requiring modules from other packages in the backend space.
10:49 Patrik Oldsberg But frontend-wise it's a fairly common webpack setup that we've just added some extra flavor to. For example, the config injection and - this is important, of course - the chunk splitting. This is one of the reasons we are still using webpack, I would say.
11:11 Patrik Oldsberg I really do want to switch over to something like esbuild, but webpack has very good chunk splitting, being able to get the dynamic chunks out. We also very aggressively reduce the modules down into smaller chunks to be able to cache them aggressively in the frontend, so that we don't have one big vendor bundle where, as soon as you change a single dependency, the entire bundle gets invalidated and you have to download it all again. We split it up into way smaller chunks of dependencies, so that when you just bump React, for example, only React needs to be downloaded again.
11:55 Patrik Oldsberg Transform-wise, we use SWC. We used to use Sucrase, and we've used esbuild - that's the TL;DR of the transpilation process.
11:56 Patrik Oldsberg We use whatever is the best tool at the moment for what we need, and we've switched around a bit; we'll see if SWC ends up being it - it's really good right now. Hopefully we stay there for a while. We still use esbuild in the Rollup piece because it produces nice code and is very fast.
12:28 Patrik Oldsberg But SWC, of course, is the only one that has React Fast Refresh support. And we stay far away from Babel, for example, because it is very slow at scale. I think the very last thing to cover that's important is the publishing step.
12:52 Patrik Oldsberg So I mentioned that in local development we do development straight from source, right? And that means that - oh, I'm looking at a bad package here. Here we go. The config package, for example: main, types, all of this points to source.
13:07 Patrik Oldsberg This is of course not available in the published package - or if it was, you wouldn't want to use it. So these fields are rewritten when packages are published, and that is done in the prepack script. So essentially main is going to get pointed over to the dist folder instead, and types as well.
13:07 Patrik Oldsberg This is currently done through publishConfig; the fields in there are how you configure it. We are slowly moving at least some packages over to use exports instead.
13:39 Patrik Oldsberg And there, rather than having it in publishConfig, we rewrite the exports fields to point to dist, with the appropriate import/require/types/default conditions within the exports field - but we still develop from source. That's the main thing for the build system. The Jest setup is basically just a mirror to make all of this work as well.
14:08 Patrik Oldsberg Basically so that we can test the same sources we are building and bundling. So, anything there? How are we for time? Should I dive into the frontend system?
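Before moving on to the frontend system, for reference: a minimal sketch (not the actual backstage-cli implementation) of the kind of publish-time rewrite described above, where fields that point at TypeScript source during development get pointed at the built dist output before publishing. Field values are illustrative.

```ts
// Conceptual sketch of a prepack-style rewrite: during development,
// package.json fields point at TypeScript source; before publishing,
// they are rewritten to the built artifacts in dist/.
import { readFileSync, writeFileSync } from 'fs';

const pkgPath = 'package.json';
const pkg = JSON.parse(readFileSync(pkgPath, 'utf8'));

// e.g. "src/index.ts" during development becomes the built output on publish.
pkg.main = 'dist/index.cjs.js';
pkg.module = 'dist/index.esm.js';
pkg.types = 'dist/index.d.ts';

writeFileSync(pkgPath, JSON.stringify(pkg, null, 2) + '\n');
```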
14:23 Taras Mankovski Yeah, let's do the frontend system now.
14:27 Patrik Oldsberg I will keep it brief. Let's see where we have information. So, Plugins > Composability System. And yeah, as I think I mentioned before, we do want to give the frontend system some more love, and we'll be able to, I think, especially now that we've overhauled the backend system. The frontend documentation leaves some improvements to be made.
15:07 Patrik Oldsberg But the current frontend system is built around plugins - but extensions are really the most important thing. So this documentation that you find under Plugins > Composability System describes all of it. Jumping into some actual code with examples: to build a Backstage app -
15:32 Patrik Oldsberg Everything is in the app package. This is what you build. This is where you wire everything together.
15:33 Patrik Oldsberg This is a ton of imports of different plugins. And what we're importing here are extensions from these plugins. So we call these things extensions.
15:45 Patrik Oldsberg It's, for example, the Explore page, the GraphiQL page. Here we're seeing a lot of pages. As we dive into different parts of the app, such as the entity pages - which I won't dive too deep into - we have EntityHasComponentsCard, for example. These are all extensions that are being imported from plugins.
16:06 Patrik Oldsberg The idea is that the plugins export a number of extensions, and the integrator, who essentially writes the code in this application, decides what extensions they want to import and install into their application. And as you're installing these extensions, you have to put them somewhere. So for these pages, you install them at a route.
16:33 Patrik Oldsberg For some of them it's very plain - you just put the page there and that's it; there's not much to configure. For some of them - we see the Tech Radar page here, for example - there's a little bit of lightweight configuration. And then for other ones there's a lot more configuration that you can do.
16:53 Patrik Oldsberg Like the Scaffolder: you can pass in additional custom field extensions that are used in the Scaffolder UI and so on. And this is why we call it a composability system - because we're composing. We have an app where we're adding the Scaffolder plugin, or rather a page extension from the Scaffolder plugin, and that page extension in turn provides extension points for other plugins to provide field extensions that can extend the Scaffolder plugin.
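For reference, a minimal sketch of the app-side wiring being described: the integrator imports page extensions from plugins and installs them at routes, with lightweight or more involved configuration per extension. Imports and props are illustrative of the usual Backstage app package, not authoritative.

```tsx
// A sketch of how an integrator composes the app from plugin extensions.
import React from 'react';
import { Route } from 'react-router-dom';
import { FlatRoutes } from '@backstage/core-app-api';
import { TechRadarPage } from '@backstage/plugin-tech-radar';
import { ScaffolderPage } from '@backstage/plugin-scaffolder';

const routes = (
  <FlatRoutes>
    {/* A lightly configured page extension: just mount it at a route. */}
    <Route path="/tech-radar" element={<TechRadarPage width={1500} height={800} />} />
    {/* A more configurable extension: the Scaffolder page can itself be
        extended, e.g. with custom field extensions passed as children. */}
    <Route path="/create" element={<ScaffolderPage />} />
  </FlatRoutes>
);
```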
17:25 Patrik Oldsberg So, it sounds complicated, but essentially it's just stacking stuff on top of each other and composing pages and features from various plugins. But right, an important thing that we want to maintain is the fact that a plugin is not simply a page. It is a page that can in turn expose additional extension points for other plugins.
17:53 Patrik Oldsberg Yeah, what it looks like on the other end... Sorry, did someone have a question?
18:03 Martin Marosi I had one question. So, if an extension is giving extension points to other plugins, how is the dependency management handled - package.json?
18:23 Patrik Oldsberg What typically happens is that you have, for example - let's say the Scaffolder. You have the Scaffolder plugin, which exposes... and that is a package. So there's the Scaffolder frontend plugin package that exposes the Scaffolder page. Then there is an accompanying package, which is the Scaffolder React library - the frontend library for the Scaffolder.
18:48 Patrik Oldsberg That's a separate package that other plugins who wish to build upon the Scaffolder can install. So there's no direct dependency on the Scaffolder package. If you want to build field extensions, you instead depend on the separate web library that the Scaffolder provides, and then that is all wired together in the app.
18:55 Patrik Oldsberg So the app is going to have the dependencies on the different field extensions that you want and, of course, the Scaffolder plugin itself.
19:18 Martin Marosi So extensions don't know... okay, just let me recap. Basically, the app knows the dependencies between extensions, and not the extensions among themselves, essentially.
19:32 Patrik Oldsberg And here, under Overview > Architecture Overview > Package Architecture, this describes the relation between the different packages - the types of packages that exist. So you have, for example, the different plugins, plugin backends, and then the library packages, and how other packages can build upon this.
19:55 Taras Mankovski All right, time check. With other things, what would you like to cover? Let's do maybe a few more minutes, just to make sure we give the Red Hat folks enough time to talk about theirs, and then leave some time for action items at the end.
20:13 Patrik Oldsberg Yes, there's really one piece of code, if this would load, that I just want to show. And then, unless there are questions, I don't have anything I want to point out at this point at least.
20:27 Taras Mankovski Well, mention the API refs. Is that worth mentioning for the frontend framework?
20:35 Patrik Oldsberg We can do that. Another one to point at: Utility APIs. That's a good one.
20:43 Patrik Oldsberg Thank you. So, hopefully - yeah. So Utility APIs: it's a solution for the more non-visual bits of plugins.
21:00 Patrik Oldsberg But how do I explain it? It came out of the Backstage core framework providing a lot of Utility APIs for plugins to use. So it's stuff like the Auth APIs, the error log, the error and notification thing - various different APIs that you would use. There's a general pattern for building these APIs that we have internally.
21:22 Patrik Oldsberg So they're shared APIs for plugins to use. They are exported as interfaces, and there are API references that you use to access the concrete implementation of the APIs. It's decoupled from React, but you can consume them in React using hooks that we provide, so that you can access these APIs.
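For reference, a minimal sketch of the utility API pattern being described, based on @backstage/core-plugin-api; the example interface and ID below are invented for illustration.

```ts
import { createApiRef, useApi } from '@backstage/core-plugin-api';

// The plugin exports an interface plus an API reference...
export interface GreetingApi {
  greet(name: string): Promise<string>;
}

export const greetingApiRef = createApiRef<GreetingApi>({
  id: 'plugin.example.greeting', // hypothetical ID
});

// ...and React components access the concrete implementation through a hook,
// staying decoupled from how the app wired it up (or overrode it).
export function useGreeting(name: string) {
  const greetingApi = useApi(greetingApiRef);
  return greetingApi.greet(name);
}
```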
21:59 Patrik Oldsberg Plugins can also provide their own APIs, and it is fairly common that they do that. So the Scaffolder, for example, has a scaffolderApiRef here. It provides that as part of the plugin.
22:11 Patrik Oldsberg That means that components within the Scaffolder can use that API to implement their functionality. If the API ref is internal, it's just an internal utility. If it's exported from the plugin, it means that other plugins can potentially rely on this API and use it themselves.
22:32 Patrik Oldsberg And it also means that you can override this API in the application, so that you can either decorate or completely replace the implementation of the Scaffolder API with your own implementation in the app. And that is a fairly important customization point. In some cases - well, we're constrained for time.
22:55 Patrik Oldsberg So I'll point at just how these extensions are created that I mentioned before. So you have, for example, the Scaffolder page. You can see here, it's a Scaffolder plugin.
23:06 Patrik Oldsberg It provides a routable extension, which we create here. There's a small API for what it is. In reality, it's very much just a React component that we couple with the framework.
23:21 Patrik Oldsberg We just hook into the point of exporting this React component, and we're able to decorate it with additional stuff like the plugin context, error boundaries and so on. So this is a nice container for the component that is being exported. Extensions don't have to be React components, but they very often are, and I'll leave it at that.
23:50 Taras Mankovski Let's do two minutes of questions and then the Red Hat folks can take over. Any other questions before we transition to the other topic?
24:01 Sam Padgett Hey, Patrik, I was curious: if plugins are exposing APIs for other plugins, and a plugin needs to make a breaking change to an API, how is that handled? Is there any way to depend on specific versions, or any way that's managed?
24:24 Patrik Oldsberg In Backstage today, you're not allowed to break the consumption side of an API ref - the consumption side of the interface. If you want to do that, then you create a new API ref, unless the plugin is in an experimental phase. So you would just do a Scaffolder v2 API ref. It is fine in our version policy to break the producing end - it's fine to break the app - because the impact of not allowing that is bigger; it actually makes the consumption way more complicated.
24:36 Patrik Oldsberg So if you want to add a new method that's required, you can do that. And that, of course, won't break consumers.
25:08 Taras Mankovski Any other questions?
25:15 Martin Marosi If I didn't understand but the point.
25:17 (Speaker F) To say I would love.
25:24 Patrik Oldsberg Okay, I'm hearing Frivolous, I think.
25:28 Martin Marosi So my class starting on the bus service.
25:32 Taras Mankovski Should I mute?
25:34 Patrik Oldsberg You could do that, just in case. Okay.
25:37 Taras Mankovski All right, folks, so let's switch over. I think, Patrick - who would like to present from the Red Hat side?
25:48 Tom Coufal I'm going to say a couple of words and then I'm gonna hand over to Martin. So, can you present?
25:58 Martin Marosi Yeah.
26:00 Taras Mankovski Oh, yeah.
26:00 Patrik Oldsberg Okay, here we go. Martin.
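For reference, a minimal sketch of the routable-extension pattern Patrik walked through above, assuming current @backstage/core-plugin-api signatures; the plugin and component names are hypothetical.

```ts
import {
  createPlugin,
  createRouteRef,
  createRoutableExtension,
} from '@backstage/core-plugin-api';

export const rootRouteRef = createRouteRef({ id: 'example' });

export const examplePlugin = createPlugin({
  id: 'example',
  routes: { root: rootRouteRef },
});

// The exported extension is "just" a lazily loaded React component, but the
// framework gets a hook at the export point and can decorate it with the
// plugin context, error boundaries, and so on.
export const ExamplePage = examplePlugin.provide(
  createRoutableExtension({
    name: 'ExamplePage',
    component: () => import('./components/ExamplePage').then(m => m.ExamplePage),
    mountPoint: rootRouteRef,
  }),
);
```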
26:05 Tom Coufal So, as we've talked about before, we would like to introduce some way of extending the frontend and using new plugins - removing plugins, adding them, upgrading them, whatever that lifecycle management of a plugin means - through configuration, not through code changes. And why do we want to achieve that? Essentially, we want an easier way to maintain Backstage images: if we have a diverse set of users and instances, we don't want to maintain 20 different images for 20 different instances of Backstage. As for how we settled on a way to approach this, to handle this:
27:10 Tom Coufal We turned our attention to other projects in Red Hat who work in a similar fashion and who solved this through webpack Module Federation, and very recently we've broken off this extendability piece into a separate framework called Scalprum. And what we did since our last meeting is put together a very rough RFC, which is still a work in progress. I just filed it before this meeting, so you didn't have time to look into it.
27:49 Tom Coufal Sorry for that. You can find it here. And what we also did is a short POC showing you the capabilities - how we think of a way to load remote components, remote code, through webpack Module Federation - and how we got there.
28:16 Tom Coufal We'll talk about how we got there, what we did to achieve that, and what challenges are still ahead of us to solve to make this fully functional for full Backstage consumption. If you have any questions during the demo or at any time, just feel free to jump in. Yes - over to Martin.
28:43 Martin Marosi Okay, before I start showing the demo: I'm not sure how familiar you are with Module Federation. Should I do a two-minute crash course, or can I just jump in?
29:01 Taras Mankovski Two minutes might be good, just to level set everyone's understanding.
29:07 Martin Marosi Right. So I know that you mentioned you want to move away from webpack. Well, we locked ourselves into webpack a couple of years ago. Basically, what Module Federation allows - and this is a super simplified description - is that it gives developers the ability to import modules that are not locally installed on your machine anywhere. That's a very rough description, but essentially webpack allows - let's call them different submodules, which live on different remotes - to be pulled into the browser at runtime, or a Node environment at runtime.
29:55 Martin Marosi And in addition to that, it provides module sharing capabilities, so you can get around potential issues with downloading the same module ten times into the browser.
30:10 Patrik Oldsberg Right.
30:11 Martin Marosi You can imagine every module depends on React, but there is no point in downloading the React asset all the time into the browser. So with the remote assets and the module sharing, you can essentially - I don't want to say compile or build assets at runtime, but technically you can import remote components at runtime that have already been compiled, and essentially create the illusion that you are running a monolithic UI, even though all these assets are being built, deployed and developed separately. I think that's the very short description of Module Federation. There are a lot of details when you want to learn it and use it, and unfortunately the documentation does not do a very good job of documenting the concept.
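For reference, a rough sketch of what Module Federation does under the hood when a remote is loaded at runtime; the plugin managers discussed in this meeting wrap this flow up, and the container global and module ID below are placeholders.

```ts
import type { ComponentType } from 'react';

// Globals injected by webpack when Module Federation is enabled.
declare function __webpack_init_sharing__(scope: string): Promise<void>;
declare const __webpack_share_scopes__: { default: unknown };

type WebpackContainer = {
  init(shareScope: unknown): Promise<void>;
  get(module: string): Promise<() => { default: ComponentType }>;
};

async function loadRemoteComponent(remoteGlobal: string, module: string) {
  // 1. The remote entry script (listed in the plugin manifest) has been added
  //    to the page and registered a container on the global scope.
  const container = (window as any)[remoteGlobal] as WebpackContainer;

  // 2. Initialize sharing and hand the host's share scope to the container so
  //    that shared singletons such as react resolve to the host's copy.
  await __webpack_init_sharing__('default');
  await container.init(__webpack_share_scopes__.default);

  // 3. Ask the container for an exposed module and call the returned factory.
  const factory = await container.get(module);
  return factory().default;
}

// Usage: const LikeButton = await loadRemoteComponent('myPlugin', './LikeButton');
```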
31:18 Martin Marosi But fortunately there are a lot of examples that can help you figure out what they mean. And basically, what we did in the other products: we have our Hybrid Cloud Console frontend basically running on this. It's a micro-frontend architecture. We have over 60 different micro applications with many more remote modules, and we have been successfully running on this for the last two years, and it has given us a great opportunity to develop and scale very rapidly.
31:50 Martin Marosi Whereas I can't imagine how quickly we would be able to achieve this with some sort of monolithic architecture, or package-based architecture, I guess. So, as Tom said, we basically managed to create a simple POC. I'm going to share my screen and show you what we did.
32:16 Martin Marosi Can you see my screen? Good. So what we did is we took the Scalprum framework and added it into the Backstage frontend code, with all the configuration and everything. And what you're looking at right now: we took this entity component, and on the left side it's being loaded as you're used to, and on the right side it's basically being pulled from a remote CDN, if you like.
32:51 Martin Marosi We just have a local HTTP server running locally. And basically, you can imagine that if I were to change this component entirely, I wouldn't have to run the whole application build. I would just have to build this little module, deploy it to wherever it's supposed to be deployed, and a user would just have to hit the refresh button somewhere in production, and suddenly they would get the new version.
33:22 Martin Marosi So I'm going to show quickly how it looks in the browser and what the browser does, and then I can show you some of the code and how we achieved this integration. Right, so if I just hit refresh, it's going to do things for a bit, and you might not have noticed, but there was a slight delay before the component was rendered. What basically happens is that there is this manager of these plugins.
33:55 Martin Marosi Every single plugin has a manifest like this. It basically has some metadata required for the plugin manager to initialize and render the component. You can see it has a baseURL and some loadScripts entries.
34:12 Martin Marosi This script is basically the entry point to the module itself. And once the script is added into the browser memory, it initializes the container, it exposes the component, and you are then able to load these modules. The plugin store provider here is essentially the manager, and it has lists of all plugins.
34:40 Martin Marosi We only have the one Backstage plugin and this entry module here. These get and init functions - you can imagine this is what you get when you write import Component from 'b'. Right, so basically this get function - this is the module itself.
34:58 Martin Marosi If I store it in the browser, I can just call it - how did we name the function? LikeButton? DislikeButton?
35:18 Taras Mankovski Missing 'e'. Thank you.
35:23 Martin Marosi No, it's probably a different name. I can just quickly scan for it. There it is - buttons. Yes, sorry.
35:36 Martin Marosi And if you have ever played with the virtual DOM or anything - there we are - you can see that suddenly we have these components, and they are basically just the React components themselves. So I have successfully imported a component into the browser at runtime without the module being available in my node_modules.
35:36 Martin Marosi And all our applications in the Hybrid Cloud Console are basically these little submodules. They are not just applications - individual components are exposed as well.
There is also a concept of extensions, where I can say: okay, I want all - let's say - pages with this ID, and suddenly the browser would load all of these extensions.
36:38 Martin Marosi You can, for example, imagine all of these tabs as extensions, which I believe is probably quite similar to the extensions you were talking about earlier. So that's the main concept: plugins and extensions can be developed, built and deployed independently. There is no global build of anything; there is only a host application that manages these plugins and loads them into the browser, and then through configuration files we are basically setting up the environment.
36:49 Martin Marosi So we have one codebase that runs on multiple different environments, and these environments are customized and modified based on configuration, not based on code. This is something that Tom was talking about. So how did we achieve the integration with Backstage? Thankfully, because you are using a webpack build, it was quite simple.
36:49 Martin Marosi Apart from installing the dependencies, et cetera, the main thing that you have to do is set up the Module Federation. There are two types of Module Federation setup that have to be done: one is for the host application - the application shell, the module manager - and one is for the modules themselves.
36:49 Martin Marosi For the application manager it's extremely simple. You just have to create a ModuleFederationPlugin in your webpack - add a ModuleFederationPlugin to your webpack config. So these few lines essentially create the plugin, and then we just pushed it to the existing array.
38:31 Martin Marosi We haven't changed anything else here. Everything else stays as it is. And because of how React works - and specifically how the React context works - you also have to make sure that you share some modules.
38:49 Martin Marosi The most important modules are obviously react and react-dom. And then we have these two modules: the Scalprum React core and the OpenShift dynamic plugin SDK.
38:59 Martin Marosi Now, just a note: we are in a kind of transition phase, but we are planning to move the dynamic plugin SDK into the Scalprum organization soon. Another thing that you have to do is mark them not only for module sharing, but also as singletons. The case for that is: if I build my application independently, there is going to be an issue that it will not have access to the same instance of the React context as the other independently built plugins.
39:30 Martin Marosi So webpack thankfully gives us this option to mark a module as a singleton, which allows sharing the context - the instance of the package - between all of these modules that live in the webpack share scope. So these are the necessary steps to enable it in the host application, in the shell application, which was here: as I said, a new plugin, added into the plugins array.
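For reference, a minimal sketch of the host-side ModuleFederationPlugin setup just described: pushed into the existing webpack plugins array, with react and react-dom (plus the Scalprum and dynamic-plugin-SDK packages) shared as singletons so every remote resolves the host's instances and React context keeps working. Package names follow the demo and may differ in the real setup.

```ts
import { container } from 'webpack';

export const hostFederationPlugin = new container.ModuleFederationPlugin({
  name: 'host',
  shared: {
    // Singletons so all remotes share the host's React instance and context.
    react: { singleton: true, eager: true },
    'react-dom': { singleton: true, eager: true },
    // Framework packages mentioned in the demo; names are assumptions here.
    '@scalprum/react-core': { singleton: true },
    '@openshift/dynamic-plugin-sdk': { singleton: true },
  },
});

// ...and in the existing webpack config:
// config.plugins.push(hostFederationPlugin);
```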
39:58 Martin Marosi And then in the plugin itself we have this DynamicRemotePlugin that comes from the OpenShift dynamic plugin SDK. It should mostly use the same shared modules. It has to use the react, react-dom, Scalprum and SDK modules as singletons and share them.
40:19 Martin Marosi Anything else depends on the environment and the requirements. And then we have some plugin metadata, which is defined as simple JSON. Not sure where the source code is.
40:36 Martin Marosi We tried to add it to the YAML file, I believe, Tom, right?
40:39 (Speaker F) No, this is in the plugin packages.
40:40 Martin Marosi All right, in package.json. Yeah, we added the plugin metadata - scalprum - in the package.json. So basically you give a name to the plugin, for spec implementation reasons. The name cannot be the same as this name here.
41:12 Martin Marosi That's because of some specific limitations of container naming - you cannot use some characters for remote containers. So that's why we cannot use the same name. And then you essentially declare what exposed modules are in the remote module, in the remote library.
41:20 Martin Marosi So right now we only have the Like and Dislike buttons, but you can list 50 or however many you want in the configuration itself. Again, there's also the concept of extensions. We haven't shown it right now because it takes a little more time to set up and demo, and we didn't know how much time we would have.
41:52 Martin Marosi But what this means, essentially, is that I am now able to reference the Like and Dislike buttons wherever I want in the host application - and not only in the host application, but also in the plugins themselves. So it's not a flat structure. The Like and Dislike buttons can also consume modules from other plugins, et cetera, et cetera.
42:10 Martin Marosi So how can I access my plugins, or how can I enable the Scalprum plugin manager? In your host application you have to set up the Scalprum provider. This Scalprum provider is just a simple React provider, like you're probably used to from many other modules, and it requires only one configuration option, which is a configuration file.
42:58 Martin Marosi It's not shown nicely here, but essentially, if we look at the raw structure, it's a JSON object which acts as a registry for the plugins themselves. And as for how it looks, it's a very simple structure: you have to give it a name, and you have to give it a manifest location.
43:23 Martin Marosi The manifest location is the entry into the remote container itself.
43:31 (Speaker F) Can you show how it's defined in the app-config.yaml?
43:34 Martin Marosi Right. So we tried to use your existing files, and basically this is how you would define your dynamic plugin in the app-config as you have it right now. We tried to make it as simple as possible. In our environments we use this object to also define some additional information: some routing data, some instance-specific properties, et cetera.
44:11 Martin Marosi But at its core it can be as simple as this: a simple name and the location of the manifest, which holds the remote container entry metadata. So this is how you define and configure your plugins. And then we have this React binding - a simple React component that requires two attributes: scope is the name of the plugin, and module is the actual module that you want to use from the plugin itself.
44:42 Martin Marosi There are some additional props for the component, like error handling, loading, et cetera. But any non-Scalprum-specific props will be passed down to the component itself. So the component doesn't have to be a closed React component or anything like that.
45:02 Martin Marosi It can define its own API and have required arguments. But in this case we used the existing Like and Dislike buttons module, which didn't have any interface, which was just fine.
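For reference, a minimal sketch of the host-side wiring shown in the demo, assuming the @scalprum/react-core package; the plugin name, module name, and manifest URL are placeholders, and exact prop names may differ from the released API.

```tsx
import React from 'react';
import { ScalprumProvider, ScalprumComponent } from '@scalprum/react-core';

// The registry is a plain object: each entry names a plugin and points at the
// manifest that holds its remote-container entry metadata.
const config = {
  'backstage-plugin': {
    name: 'backstage-plugin',
    manifestLocation: 'http://localhost:8004/plugin-manifest.json',
  },
};

export const App = () => (
  <ScalprumProvider config={config}>
    {/* scope = plugin name, module = exposed module inside that plugin.
        Non-Scalprum props are passed through to the remote component. */}
    <ScalprumComponent scope="backstage-plugin" module="./LikeButton" />
  </ScalprumProvider>
);
```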
45:16 (Speaker F) Just to piggyback on that: this Scalprum component itself - as you can see, not everything is happening through configuration as of now, right? So our end goal is to also be able to instantiate these Scalprum components from the remote locations through the app-config.yaml, which we want to achieve through user-configurable mount points. Basically allowing the user to specify mount points in their frontend application - like saying, I want a new tab in the entity page, or I want a new menu item in the sidebar rendering a completely new page, and things like that. And that would dynamically load those mount points from the app-config.yaml, through this Scalprum component.
45:43 (Speaker F) Basically just iterating over whatever components you list in the app config.
46:12 Martin Marosi I know we are running out of time, so the last point that I want to show is the webpack configuration, because we have to add a webpack configuration for the plugin. As far as I know - and I'm "very experienced" with Backstage, I've seen it for like 3 hours - you are building the plugins with Rollup, and obviously Rollup and webpack are not compatible, so we had to add a simple webpack config for the plugin itself. We tried to reuse as much of your existing configuration as possible. You mentioned the chunk splitting before.
47:06 Martin Marosi Now, Module Federation is probably even more aggressive than your current setup, and that's because of the module sharing. Every single shared module is basically its own vendor package, if you will, and it can go very deep, even down to the component level. And that's what we are trying to achieve at Red Hat: we want every single component to be its own package.
47:15 Martin Marosi So the network overhead between the shared modules across the different plugins is minimal. But as you can see, again, we used whatever you had for your configuration and then just added the dynamic plugin into the plugins array and ran a simple build, and it worked. Then we just had to host the assets with an extremely simple HTTP server. And we now have everything working locally.
48:10 Martin Marosi If I just go in, add some text here, rebuild it and host it, it's going to work. Right. And this is how you can imagine dynamic plugins would work.
48:30 Martin Marosi The core application would stay the same; the separated plugin would be built and deployed, and as soon as it is deployed, the core application - the shell application - would have the updated version of the plugin, without any project-wide builds, deployments or anything like that. So you can be very granular when you're working with your plugins. I think I haven't saved it, so it's not going to be rebuilt.
49:04 (Speaker F) In addition to that, the overhead we're adding for the plugin maintainers and plugin developers is fairly minimal. The only requirement is to specify that key in the package.json and the list of exposed components in there. Right. Nothing else is required from the plugin developer - no code changes anywhere.
49:35 (Speaker F) And one thing that comes from the webpack Module Federation requirements: exports need to be default - it needs to be a default export on the components.
49:44 Martin Marosi Right.
49:44 (Speaker F) You can't federate something that is not a default export.
49:49 Martin Marosi Well, that's more of a React.lazy limitation than Module Federation. And we are using - you can see that the stuff was added here. It was added here as well, because you mentioned you don't have to rebuild stuff during development.
But if you would imagine this as a production environment, this entity on the left would still be the old version and this entity on the right would have the new version, because we don't need the root application -
50:14 Martin Marosi - rebuilt.
50:15 (Speaker F) And just to go back to the build process, to what needs to be done on the plugin developer side: basically, just the presence of this scalprum key in the package.json adds another build task for that particular package. So the particular package - the plugin - is built in two different ways. One is the standard Rollup build, building into the dist folder, and the other one is a webpack build, building into the dist-scalprum folder. So you can still use the plugin both ways.
50:44 (Speaker F) You can host it as a micro-frontend from dist-scalprum, and you can install it or publish it as a normal Backstage plugin through the Rollup config, as you may be used to.
51:01 Martin Marosi Yes, I think we are out of time.
51:04 Taras Mankovski We're out of time. I actually have to go - I have another meeting starting right now. But I made Patrik the host, so Patrik, can you close off the meeting? I think we just need to figure out if there are questions and if there are any action tasks to take, and we can set up a follow-up conversation for the VMware folks to show their work and then figure out what the follow-up actions will be.
51:35 Patrik Oldsberg Yes, sounds good. I can stick around for another 15 minutes or so. Is that all right, folks?
51:44 Taras Mankovski Okay, I'll let you folks continue. It should continue recording until the end of the meeting, so I'll watch the rest afterwards.
51:54 Martin Marosi Okay, bye, everyone.
51:56 Patrik Oldsberg Thank you. Bye. Thank you for running through it. It gives me a lot more context. I think, in my head, the two trickiest bits of this are the composability and how you configure it - how you provide a good experience there and a good system for how that works.
52:34 Patrik Oldsberg And the other thing is -
52:38 Martin Marosi Go ahead.
52:40 Patrik Oldsberg And the other thing is the federation system, or the build system, in combination with a good local development experience.
52:54 Martin Marosi Right.
52:55 Patrik Oldsberg Also including when you're building a plugin in a repo that is separate from the main Backstage repo. Yes.
53:02 Martin Marosi So this is something that obviously we had to tackle on our side. Specifically, what we do in the Hybrid Cloud Console: we essentially have a development environment.
53:12 Patrik Oldsberg Right.
53:13 Martin Marosi And then if you want to develop a plugin - actually develop it within the environment - we have a local proxy setup which hooks into the deployed environment. Because the application is huge, right? There are APIs, multiple frontends, and if you were to run everything locally at the same time, you would probably need a spaceship to run it.
53:43 Martin Marosi It's fairly resource-heavy to run the whole console, so that would not be feasible. So we have this development environment which everybody is developing against, and they can run their UI locally and develop their specific piece. Now, within the development proxy, we also have the option to run multiple different UIs at the same time.
54:06 Martin Marosi You just have to tell your environment: whenever I want to run plugin A, it's available at localhost, I don't know, 8005 or whatever. So that's how we handle the development.
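For reference, a rough sketch of the local-development approach Martin describes: a webpack dev server serves one plugin's assets locally and proxies everything else to the shared development environment, so React Fast Refresh keeps working without running the whole console. Hosts, ports, and paths are placeholders, not their actual configuration.

```ts
import type { Configuration } from 'webpack-dev-server';

export const devServer: Configuration = {
  port: 8005,
  proxy: [
    {
      // Proxy everything except this plugin's own assets to the shared
      // development environment; the plugin bundle itself is served locally
      // by this dev server, which keeps hot reload / fast refresh working.
      context: ['**', '!/plugins/my-plugin/**'],
      target: 'https://console.dev.example.com',
      changeOrigin: true,
      secure: false,
    },
  ],
};
```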
54:25 Patrik Oldsberg Sorry - does the plugin then get kind of injected into the production application, effectively, like replacing what is currently there with your own modules?
54:40 Martin Marosi We are basically rerouting the requests for the assets to the local machine.
54:46 Patrik Oldsberg Right. Because you're visiting the main app, but then instead of going to grab the production version of your plugin, you grab the local development one. Yes. Makes sense.
54:58 Patrik Oldsberg Cool. And that still is - well, I guess that's fairly... does that work with React Fast Refresh, for example?
55:07 Martin Marosi Yeah, it does work, yes.
55:10 Patrik Oldsberg Because it's effectively serving a development bundle from there, rather than a built thing.
55:16 Martin Marosi Yeah. The webpack dev server is still hooked into the browser.
55:19 Patrik Oldsberg Right.
55:20 Martin Marosi So instead of running from the actual root of the application, the fast refresh essentially starts at the application entry point.
55:31 Patrik Oldsberg That sounds like a nice solution. It's still complicated to set up, but it sounds like a good solution that can bring at least a similar - really, a better - development experience if you're considering a large Backstage repo. We'd have to weave the backends in there as well, but that's kind of simple when you already have a proxy locally.
55:54 Martin Marosi Yes. And for the composability: we are kind of in a transition period when it comes to composing the environment itself. In the past we had these static configuration files, and we essentially defined the list of all available - let's call them micro applications, not just modules, because micro applications can also expose modules and extensions, et cetera. So we have this list, and in this list each module has its own entry.
56:30 Martin Marosi And within the entry we say: okay, micro application A can be on route B, C, D. It's not limited to just one route - it can be on multiple routes, and each route can use a different entry point from the micro application. And it also can have some different configuration.
56:56 Martin Marosi One set of the configuration attributes is for the shell application, the other set of properties is for the component entry point itself. So you can imagine it as some sort of initial props for a component whenever it gets mounted, right? So that's the past. Now, we will stick with this format, but before it was static, and now we are moving into a Kubernetes environment where every application will have its own custom resource.
57:07 Martin Marosi So instead of having a central repository of this configuration, each application will own its own piece of the configuration. And during the build, with OpenShift Operators, we will compose this main configuration file and then serve it in the environment itself.
57:54 (Speaker F) So what all of this means for our Backstage experience is kind of different, right? Because we can't limit ourselves to just these Kubernetes deployments as the foundation.
58:13 Martin Marosi Sorry - go ahead.
58:23 (Speaker F) So we don't want to limit ourselves just to Kubernetes deployments, because we know that not all Backstage users are Kubernetes users, and people are running it in different fashions. So we want to preserve this static-configuration type of experience, where everything is configured through the app-config.yaml.
And since we know that each Backstage instance can look completely different from another Backstage instance, we want to offer some sort of mount point logic and helpers, so that the integrator who's creating the Backstage instance can define what mount points they use and where they use them. So imagine that in your entity page you put a listener or fetcher for a particular mount point, saying basically: import all the components that I have in the config file for this particular mount point into this particular place.
58:45 (Speaker F) And in a different page, in a different part of the DOM, I can have a similar thing for a different mount point, which gives the integrator the freedom to specify the mount points themselves. But also, once those mount points are specified and solidified, you can dynamically change what's displayed on those mount points through the app config.
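For reference, a rough sketch of the mount-point idea being proposed here (not an existing API): the integrator places named mount points in the app, and whatever the app-config lists for a given mount point is loaded dynamically through Scalprum. All names below are hypothetical.

```tsx
import React from 'react';
import { ScalprumComponent } from '@scalprum/react-core';

type MountPointEntry = {
  scope: string;                    // plugin name
  module: string;                   // exposed module to render
  props?: Record<string, unknown>;  // optional props forwarded to the component
};

// In the proposal this registry would be read from app-config.yaml;
// it is hard-coded here to keep the sketch self-contained.
const mountPoints: Record<string, MountPointEntry[]> = {
  'entity.page/cards': [{ scope: 'my-plugin', module: './InfoCard' }],
};

// The integrator drops <MountPoint id="entity.page/cards" /> wherever that
// slot should appear; adding a plugin then only means adding a config entry.
export const MountPoint = ({ id }: { id: string }) => (
  <>
    {(mountPoints[id] ?? []).map((entry, index) => (
      <ScalprumComponent
        key={index}
        scope={entry.scope}
        module={entry.module}
        {...entry.props}
      />
    ))}
  </>
);
```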
59:59 Patrik Oldsberg And there are just some weird, small, tricky bits that I find. For example - do you remember the tabs on the entity page?
60:13 Martin Marosi Yes.
60:13 Patrik Oldsberg How would you handle ordering of those tabs?
60:20 Martin Marosi Maybe Sam can answer that question, because they're actually using that in OpenShift.
60:27 Sam Padgett Yeah. So the way we've done it, at least for the left nav: we have a left navigation with various menu items organized by section. And this isn't perfect - we have insert-before and insert-after properties.
60:43 Sam Padgett We define that extension point. So you can say, I want to add this nav item and insert it before this other one. You still have to figure out, if two plugins are trying to insert an item before the same standard nav item, which goes first. So we fall back to alphabetical at that point, if there's a tie between them, so that you have stable ordering - so that if you refresh the page, it doesn't start flipping back and forth.
61:16 Sam Padgett But, yeah, that's the approach we're taking at this point. So I think for tabs we could do the same thing - potentially say, I want to be inserted before or after this tab.
61:30 Martin Marosi So what we did in the Hybrid Cloud Console is that we give the individual teams and micro applications ownership of their configuration, including, for example, navigation files. But then the shell application itself controls where these pieces get inserted. Right. If that makes sense.
61:30 Martin Marosi So let's say that the application says: I want my navigation item to be visible - and whether it's expandable, static, whatever, doesn't matter at the moment. But they tell us, okay, we want to be in section A, right? And the shell application itself has some sort of templating for section A and, based on the IDs, inserts the item at its correct place, based on where the product management and user experience teams have decided that the application should live.
62:36 Patrik Oldsberg Sorry, just conceptually, from the point of view of the one configuring the layout or the integration of all of the plugins: are you taking extensions from plugins and putting them on mount points, or are you saying, I want this plugin, and then it will automatically go on the mount point that it knows it should be in?
63:00 Martin Marosi Yeah. So - again, not talking about OpenShift Console here, because that is slightly different - but in Hybrid Cloud Console, the micro application itself tells us which routes it is supposed to be hooked on, and then we take...
63:22 Patrik Oldsberg Because it's kind of unique, but if we're dealing with...
63:25 Martin Marosi ...composing pages - well, there we are giving this freedom to the micro application itself. So if I want to use a module from a different micro application right now, I have to specify where I use that application and when I inject it into the frontend. That is Hybrid Cloud Console - we haven't fully embraced the extensions.
63:52 Martin Marosi The extensions part of Scalprum - the generic extensions that say, I want tabs, so it will give you all the tabs - that's the part that's actually coming from the OpenShift Console, in that sense.
64:08 Patrik Oldsberg Because I know, early on, what the internal version of Backstage had before open-sourcing was that it was all plugins - plugins with pages - and where there was composition going on, essentially a plugin registered a page and then just imported from other plugins whatever content they needed. Is that fairly similar, then - that you have a micro application that's a page, and then they kind of just grab things? Yes, that is something that we had to unwind for the open source version, because all of a sudden this page is no longer something that you can just give someone and assume is correct for everyone. You actually need to let the integrator pick what should appear on that page, which is particularly relevant for the catalog, of course.
64:57 Sam Padgett In OpenShift Console, we've done this by defining the different extension points, to say: here are the various places in the UI that you can extend. And then the plugin says, I have an extension, I want to use this particular extension point - whether it's the nav, or I want to add a new page, or I want to extend an existing page, like adding a tab to this specific page or a card to this dashboard page.
65:23 Patrik Oldsberg The question that pops up in my mind is: how far do you want to go with the abstraction? Right. Do you want to say that you have extension points that have various supported types, you have extensions of various types, and the ones that are compatible can be matched together by the integrator? You can go pretty far - and potentially overkill - with how much you encode into the system, I guess. So that's a tricky design decision.
65:52 (Speaker F) Is it more about - are you asking about how we should approach it in Backstage, or how we...
66:05 Patrik Oldsberg I'm just thinking about this kind of system and what it could look like in Backstage. There are just different things to do.
66:18 (Speaker F) I don't think in Backstage we can go as far as the plugins deciding where to display themselves, simply because of the fact that each Backstage instance is different. You can have a catalog mounted on a different route, and that can break all the extensions deciding that they want to be displayed on this particular route.
66:42 Martin Marosi Right.
66:46 (Speaker F) At least that's what I'm thinking about. We want to give this power to the creator of the Backstage instance, to decide where they consume given mount points. And these mount points are just key names from the app-config.yaml, where you specify what's to be displayed on that particular mount point.
So the creator of the Backstage instance decides where the mount points are, and if you're configuring it and adding new plugins, you don't need to change the code or change the React components and the structure there - you just add a new entry into the mount point.
67:30 Patrik Oldsberg But I would want plugins to be able to have mount points themselves, where extensions can be installed.
67:42 Martin Marosi We don't see an issue with that.
67:44 Patrik Oldsberg Right?
67:45 Martin Marosi Yeah. I think this kind of goes hand in hand with placing a lot of trust in the plugin developers.
67:55 Patrik Oldsberg Right.
67:55 Martin Marosi Because suddenly they have this power to essentially inject a whole new application into their plugin without the - let's say they...
68:04 Patrik Oldsberg They can already do that.
68:06 Martin Marosi Yeah.
68:08 Patrik Oldsberg Doesn't matter. Yeah, no, there are definitely some guardrails that might be needed in some places. Okay, I think it's time to wrap up, right? It's beyond time to wrap up. So one thing, actually, from our end - from the maintainer and also the Spotify end - it looks like this is getting on the roadmap, and that this overall goal of removing the need to change code to install plugins is something we'll be able to prioritize and work on.
68:49 Martin Marosi Yeah, that's great, because as soon as we got notified - Tom told me and Sam about this requirement - we went all gas on properly open-sourcing the project itself. Right. So it was an internal project.
69:06 Martin Marosi There wasn't much community documentation, et cetera. I also mentioned that part of the code lives in the OpenShift organization. We're trying to move it into the Scalprum organization, in one place.
69:20 Martin Marosi Prepare demos, documentation, everything. But as I said, it's been running in the Hybrid Cloud Console for two years. So you don't have to be worried that it's not stable, or that it's experimental or anything like that.
69:32 Martin Marosi It's more that documentation for an internal project is usually garbage. So we are trying to fix that very quickly. So I would say that, before you decide to put it on the roadmap and integrate it fully, we have to figure that out.
69:39 Martin Marosi So that whenever we have this version zero, it's actually prepared for integration, with all the dependencies and everything in the right place, so we just don't have to make some weird updates because we changed names or something like that.
70:08 Patrik Oldsberg I think, especially if we just adopt it straight off, there's a lot of thought that has to go into that. Of course, I think at the very least we'll be experimenting and looking around. I can't imagine we'll land far away, to be honest.
70:34 Patrik Oldsberg Some of the stuff I mentioned - for example, the extensions that plugins export - we wrap those up with additional Backstage framework bits. I'm sure that can be added too. But we like to avoid extensions to the build system if possible.
70:53 Patrik Oldsberg That kind of changes runtime behavior. So I would be interested in solving that mostly through APIs, but it's not a requirement - it's a soft requirement. I don't know.
71:05 Patrik Oldsberg As for the way forward, we still have thinking to do to figure out where we can head. I will definitely, of course, read the RFC and forward it to the other maintainers internally.
71:26 Martin Marosi Cool.
71:27 Patrik Oldsberg Do you have any other suggested things we should look at?
For example, are you essentially recommending that we look at adopting Scalprum as the microservice kind of framework as part of Backstage?
71:46 Martin Marosi So yes, and the reason for that is that we want existing modules that live in Red Hat products to be able to be plugged into Backstage. That's the whole Red Hat thing; that's basically the suggestion here. Now, well, I'm not going to say we have to do it like that, right.
72:11 Martin Marosi Specifically with the extension part, it's going to be difficult not to use it, let me put it that way. With the micro application mount point, it's kind of easy, because it's technically just a wrapper around module federation with some bits added onto it. You would lose some features that Scalprum provides, but that's fine. With the extensions themselves, as OpenShift Console is using them, it would become problematic if something else were developed.
72:46 Patrik Oldsberg Most likely. So we will want to look at Scalprum properly, look at the technology, look at the experience. I think regardless, and we've done this in the past when we looked at other solutions, we'll want to back up and rethink from a blank slate: what kind of development experience do we want, and what would the top-down design of this system look like for people that build plugins and applications and so on in Backstage? And then go back and see how well it matches and how close Scalprum is, for example, and where we can go. Okay, as for ways forward, I don't know.
73:35 Patrik Oldsberg I'll let you know when we know more, to begin with.
73:42 Tom Coufal I think we still want to, we would like to see what we have as a solution to maybe a similar problem, and that is the demo that should have happened today and didn't. So I think at least for that we should meet again.
74:01 Patrik Oldsberg That's true.
74:03 Tom Coufal And I would appreciate any additional discussions about the RFCs. And if we decide that we're happy with what we have for the front end, we can switch gears and start thinking about the back end microservice architecture and plugins there, which, if you remember, during the first meeting we decided to break apart. Maybe we can...
74:32 Patrik Oldsberg I can say, for the back end, the back end system that we've done takes us almost all the way. I would say there's a little bit more needed, but we want to do the same thing in the back end. Ultimately, it's just that the front end is way further away at this point. So we're focusing there, of course.
74:52 Patrik Oldsberg But yes, that sounds good to me.
74:56 Tom Coufal I don't know enough about the new back end things. To me it seems harder to achieve this on the back end side, as we've just shown it's achievable on the front end. So whichever way we decide to go with the front end, allowing this to be independent is key for us. Even if we choose not to use Scalprum, which I think would be a shame, but if we decide that...
75:29 Martin Marosi Well, you're slightly biased.
75:31 Tom Coufal I'm definitely biased. But if we settle on a different technology, I'm still happy with that. And we can help go forward with that initiative as well.
75:46 Patrik Oldsberg I'm just looking for a healthy balance of not too much tech on our end, but also not too much magic somewhere else when it's too close to core. We'll see. Cool.
76:00 Patrik Oldsberg It sounds like another meeting to discuss VMware. It's good either way. Let's talk again.
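When Martin describes the mount point part as "technically just a wrapper around module federation", the underlying mechanism is webpack's dynamic remote container loading. Below is a rough sketch of that low-level pattern; the URLs, container name, and module path are placeholders, and a framework such as Scalprum would hide these details behind its own API.

```typescript
// Low-level pattern for loading a federated module at runtime
// (following webpack's documented dynamic remote container approach).
// All concrete names and URLs below are placeholders.

declare function __webpack_init_sharing__(scope: string): Promise<void>;
declare const __webpack_share_scopes__: { default: unknown };

interface RemoteContainer {
  init(shareScope: unknown): Promise<void>;
  get(module: string): Promise<() => unknown>;
}

// 1. Inject the remote entry script published by the plugin's build.
function loadScript(url: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const el = document.createElement('script');
    el.src = url;
    el.onload = () => resolve();
    el.onerror = () => reject(new Error(`Failed to load ${url}`));
    document.head.appendChild(el);
  });
}

// 2. Initialize shared dependencies, then pull an exposed module out of the
//    container the remote entry registered on the global scope.
export async function loadFederatedModule(
  remoteEntryUrl: string,
  containerName: string,
  exposedModule: string,
): Promise<unknown> {
  await loadScript(remoteEntryUrl);
  await __webpack_init_sharing__('default');
  const container = (window as unknown as Record<string, RemoteContainer>)[
    containerName
  ];
  await container.init(__webpack_share_scopes__.default);
  const factory = await container.get(exposedModule);
  return factory();
}

// Usage with placeholder values:
// const card = await loadFederatedModule(
//   'https://plugins.example.com/cost-insights/remoteEntry.js',
//   'costInsights',
//   './CostSummaryCard',
// );
```

A wrapper framework typically adds conveniences on top of this primitive, such as caching loaded containers and wiring the loaded components into the host's rendering.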
76:10 Patrik Oldsberg Cool. I'll ping Taras about that, and I'm sure he'll be able to set it up with the tool that I can't remember the name of. Anyway, good chat.
76:24 Tom Coufal Thank you.
76:26 Martin Marosi Take care. Bye. Thanks. |
A decision was taken today during the Adoption SIG to rename this working group to "Declarative Integration Working Group", with the new scope being "installing plugins without modifying TypeScript". |
Meeting 4: Sep 7, 2023
Video Recording: https://drive.google.com/file/d/18Q9uZ2M7M9xsN2t7b4Ptn0CJ6_I2sZkY
Meeting 5: Sept 20, 2023
Video Recording: https://drive.google.com/file/d/1GXQVeuAXHWBOsfV2CrJUtHJFfbc40usO/view
Backstage Microservice Plugins Working Group
Meeting 1 (Feb 11, 2023)
Video Recording
Meeting 2 (Feb 28, 2023, 10:00 AM - 11:00 AM EST)
Video Recording not available
Meeting Information
Meeting 3 (March 23, 2023 • 10-11 AM)
Video Recording
Transcript
Meeting #4 (TBD)
Agenda
Choose a time https://doodle.com/meeting/organize/id/dJNZMxvd