Episodes
-
Architect Tip: What is Technical Debt?
Welcome to Architect Tips presented by Clear Measure, a software architecture company empowering .NET development teams to be self-sufficient, able to move fast, deliver quality, and run their software systems with confidence. Make sure to subscribe on YouTube or your video podcast feed. If you have a question for the show, send it to [email protected] and the next tip could be yours!
What is Technical Debt?
Now, we all want to move fast, deliver consistent quality, and run our software in production with confidence, knowing that everything works and we won't be surprised by bugs getting through. At this point, if you haven't been listening to the Azure DevOps Podcast, which we sponsor, check it out, and specifically listen to Episode 150 with Capers Jones. I highly encourage you to do so. Capers is probably the most prolific publisher of research in the software industry, and his research shows that quality precedes productivity. If you want your team to go fast and you focus only on speed, then your quality is likely to suffer, and you won't go fast at all. It's an interesting dynamic. In an attempt to move fast, you may skip some automated test cases or push something out without fully thinking through all the aspects of the design, and then, when a production bug happens, a developer stops working on a feature and spends time fixing the bug. In this instance we have discovered new work, unplanned work that has to instantly move to the top of the priority list, pushing back other prioritized feature work, and it's very likely that the developer loses half a day or more fixing the bug. Then there's more pressure after that to finish the feature. And if that feature doesn't have a complete design or a complete level of automated testing coverage, then a bug in it will surface at some point soon, repeating the cycle. In aggregate, the discovery of bugs and the effort to fix them can end up consuming a lot of development time, and from outside the team it can appear that development doesn't move very fast. In reality, the new features people are waiting on could move pretty quickly; they can be pretty simple features. But because of the quality issues, the development capacity remaining in the week to actually work on new features is not a full-time effort; it can be a fraction of the development week. The point at which all development time in a week is used to fix existing defects, where we're not working on new features at all, just fixing things that broke, is complete development bankruptcy. We're sunk.
Now, some people call this dynamic technical debt, and it's called that to invoke the metaphor of interest payments on a credit card. If my paycheck is $4,000 per month and I pay $100 per month to cover the interest that has accrued on a credit card on the amount I've charged, then I can keep going and manage that, because I have $3,900 per month left of available cash to pay other expenses and whatnot.
But if I continue spending above my means and charging it to a credit card, then there will come a time when I'm paying all of the money I have every month just to pay the interest on the rather large balance. At that point it's all gone over a cliff; my margin for adjusting, my recovery, is gone. I can't pay it down. I just don't have the means. Technical debt in software is incomplete design and insufficient automated test coverage. You could think of it this way: I paid for a feature, or your business partners or customers paid for a feature, and in development we pay for a feature by working on it, spending time on it. But if I only pay for some of it, then the rest is technical debt, and so the rest of it I put on the charge account to be paid in the future. In the future, when a bug surfaces because of an incomplete design or insufficient test coverage, and likely because I have no automated tests covering all the paths, something is going to slip through, and I'm going to have to pay down that debt at that point, payment in this scenario, of course, coming in the form of time. Anyone who's been doing development for some time has experienced situations where they have their week all planned out nicely.
Feature A we will ship on Tuesday, Feature B we'll ship on Friday. And then in comes a bug surprise. Now your schedule is hosed. You can try to explain why Feature A or B didn't make it in time, but what will you say? Some gremlin got into the code and broke something, it's not my fault? When it comes down to it, the only truthful thing to say is that either a previous design was incomplete or I didn't cover all the behavior paths with tests to prove them. Either way, I told you a feature was complete last week or a few weeks back, and now we can see that the feature wasn't complete, and now I have to work more to finish the work that I had previously reported as finished. So that is what people mean when they talk about technical debt: it's work that we should have done, but we put it on the charge account, and it's going to come back to us at some point. So I hope this helps. Stay tuned for more tech tips and keep shipping.
Thanks for watching Architect Tips. If you would like help improving your team's speed, quality, or software stability, send us a note to [email protected]. On behalf of everyone at Clear Measure, thanks for watching and may God bless you.
-
How often should you be deploying your software to production? Welcome to Architect Tips, presented by Clear Measure, a software architecture company empowering .NET development teams to be self-sufficient, able to move fast, deliver quality, and run their software systems with confidence. Make sure to subscribe on YouTube or your video podcast feed. If you have a question for the show, send it to [email protected] and the next tip could be yours.
Welcome to Architect Tips. I'm Jeffrey Palermo, and today we're going to talk about how often to deploy to production. If you have a question for Architect Tips, send it to [email protected], and from those submitted we will pick a question. If I can put it into a short, five-minute, bite-size chunk, I'll do it; otherwise, I'll just send you an email and answer it for you. So how often should you be deploying your custom .NET software applications to production, no matter the size of the software or of the
individual chunks? Let's dig into that, and you're going to answer the question for yourself, but it's going to depend on a number of factors. The first and foremost is the pace of your business. If your business needs to give new things to customers once every three months, well, then the minimum is to deploy to production once every three months. If your business needs to roll out changes to customers every week or even every day, then that's the slam-dunk answer. Now, let's suppose that your business doesn't really intentionally roll things out to customers, but every few months you are still making improvements to the software: you're increasing the quality of your telemetry, you're making it run faster, you're making it scale better on fewer server resources. You'll still be doing production deployments even if you're not actually giving anything to the customers, and so the answer is: you need to deploy at least as fast as your business moves, and faster for those other scenarios. Now let's back up from that question and talk about the pace of your testing, because if you are testing performance improvements or stability improvements, well, that's also going to determine the frequency at which you deploy. Even if you give, let's say, product management the button to push to production and say, hey look, build such-and-such is ready for production, as soon as you press that button it's going to go. If they're never waiting on engineering, that's success. Product management should never be waiting on engineering; engineering should be waiting on product management, whatever form that takes in your company, whether it's Joe next door or whether you are in a larger organization with more formalized product management. But the pace of your testing can determine how often you deploy to a pre-production testing environment, because every DevOps environment has three categories
of environments. The first is production; everybody has at least one of those. Next is a manual test environment. You need at least one of those, and a lot of organizations have many of them; that's one category. The third category is automated test environments, or test automation environments. I like to call those the TDD environment, to invoke test-driven development. So you have three categories of environments. Now, the pace of your development is also going to impact how often you deploy, not only to production but to test and to your test automation environment, and the raw ability to deploy quickly comes back to how you do code branching. If you are doing branches that live for days and weeks on end, you're not going to be able to do production deployments on any kind of frequency. So you need to have every individual change be on its own short-lived branch. And when I say every change: if you're changing the way a button operates, that's a branch. If you are adding a new screen, that's a branch. If you're adding a new field to an existing screen that also adds a corresponding column to a database table, that's a branch. And no branch should live more than two days. Every branch should be targeted to live one day, something that I can get done in one day, with two days as my fallback. Okay, maybe it does take me two days, but if you have a branch open for three days and four days and five days, that means that either it was too much work, or we didn't understand the design we were going for and I didn't really have a plan for implementing it, or something else came up, some new information I learned, and now I've learned it's more of an effort than I thought it was. So the size of the branch is going to make a difference. Now, here's another related question: how do we know if we're okay to deploy, to hit the button and push to production? I'm assuming you have automated deployments; we can come back to that later. But how do I know we're okay to deploy? It comes back to quality in our pipeline. The word pipeline invokes the metaphor of water: you want the water to be clean all the way through the pipeline, not dirty everywhere with an attempt to filter it at the end. These are questions your DevOps pipeline should be able to answer for us. Is the change that I just made okay to share with my team? That gets answered by the private build; that's the first tier of continuous integration. Does my change play well with other changes, other code changes elsewhere on my team? That's the integration build, the second tier of continuous integration. And then third: does this release candidate, this build that has been produced with all of our changes on the team, still deploy? Does it still start up? Does it still generally function? That is answered by the third phase of continuous integration, which is that first deployment to the TDD environment, or test automation environment, where no humans really go; it's just to vet the release candidate. And then, as we get further down the line, if the acceptance test suite that runs in the TDD environment passes, well, now we know we're okay to share with stakeholders, product management, testers, whoever we give that build to beyond the engineering team. And then finally, if they run it through its paces and they don't find any issues, now it's okay to share with the customers. So those are our answers as to whether it is okay to deploy to production,
whether we're okay to deploy to a test environment, and whether we're okay to even share a branch with our team. So I hope that helps. And as always, if you need any architecture help: at Clear Measure we're a software architecture company; we exist to help you move fast, deliver quality, and run your systems with confidence. We want you to be able to do more internally so that you and your team can perform at another level and are more than capable of delivering for your customers and for your company, which is fantastic. Thanks for now.
Thanks for watching Architect Tips. If you would like help improving your team's speed, quality, or software stability, send us a note to [email protected]. On behalf of everyone at Clear Measure, thanks for watching and may God bless you.
-
Welcome to Architect Tips presented by Clear Measure, a software architecture company empowering .NET development teams to be self-sufficient, able to move fast, deliver quality, and run their software systems with confidence. Make sure to subscribe on YouTube or your video podcast feed. If you have a question for the show, send it to [email protected] and the next tip could be yours.
Howdy! Welcome to Architect Tips. Today I want to talk about strategies for writing and maintaining automated tests for object-relational mapping whenever you have a type hierarchy that you have mapped. On the screen I have a sample from a car auction application, where we have an auction entry, but then we have three different types of entries. In an auction, the things being sold are typically called lots; if you're selling that sweet Chevrolet Corvette, it will be lot number such-and-such. So you have a competitive lot, a consortium lot, where multiple people or organizations can go in on it together, and an add-on-only lot for donations, but they all derive from AuctionEntry. So in the code you would see that a derived lot type inherits from AuctionEntry, okay? Well, what about your mapping? This is Entity Framework, and it applies to Entity Framework Core, or just Entity Framework on .NET 5 as well. So we're going to go to the AuctionEntryMap. We use this convention for every type we map, an aggregate root in domain-driven design speak: we have an actual map class so that the mappings don't get out of control; you want to have a class per map. And if I go over to my data context, you can see that we have a method here that just lists all of them, and we can control the schema; in some cases you want to put them in a different schema, but that's just an aside. For AuctionEntry, we've established it as table-per-hierarchy, so we've added a discriminator. Okay, that's great; just follow the documentation. And in our code we can find ConsortiumLotMap, and we can see that here we have declared that our base type is AuctionEntry, although even if you don't put that there, Entity Framework kind of figures it out. This is our discriminator; it's only in the database, it's not in the object model. And this derived type has an additional property and an additional collection that are not on the base type. Likewise, we can go to another one, the competitive lot. We can also go
to the add-on-only lot, for donations; that basically stays the same. There are no differences just yet, although this will grow. But over in our tests, that's where it gets interesting. So I'm actually going to go to AuctionEntryMappingTester and hit Ctrl+F12. Notice that I have six different tests, and over in ConsortiumLotMappingTester I only have two tests. However, because ConsortiumLot is an AuctionEntry, that means that all of the different queries and all of the different persistence and rehydration scenarios that work on AuctionEntry should work for ConsortiumLot. So do I have to duplicate those tests? The answer is no, and this is how. In our AuctionEntryMappingTester, all we do is take the tests that create an AuctionEntry and factor that out: we extract a method that returns AuctionEntry and we mark it virtual. That gives us the ability to add some polymorphism; given that our mapping is polymorphic, we want the tests to be polymorphic as well. So what we're going to do here is hit Ctrl+Alt+B to look at derived types, and you can see that I have a mapping tester class for each of the derived types. If I look at ConsortiumLotMappingTester, it does in fact inherit; it overrides CreateAuctionEntry, and the usages of this are not found in this file, they're in the base class. So then we only have to actually add two more tests. Now, if I run it with my shortcut keys, Alt+Up and then Ctrl+T, R, let's let this run. And look, I have 11 tests for ConsortiumLotMappingTester. Why? Well, it's because I have the two tests that are here, plus I have the six tests from the base class; that's eight. But if we look down, a few of these tests have extra test cases, and that brings us up to 11. So make your test fixtures polymorphic for all of your derived types; it'll be really easy to make sure that you are keeping to the rules of object-oriented programming, making sure that a derived type is an instance of the base type, and that we're not accidentally breaking some functionality that worked with the base type, because that will jump up and bite you when you're using the application and there's some weird bug because you're using a derived type that was returned from a query. So I hope this helps. Make sure to subscribe to Architect Tips in whatever podcast directory you found us in, and have a good day.
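Here is a minimal sketch of the polymorphic test-fixture idea described above. The type and member names (AuctionEntry, ConsortiumLot, CreateAuctionEntry) come from the episode; the NUnit attributes, the simplified domain types, and the elided persistence details are assumptions, since the actual project is only shown on screen.

```csharp
using NUnit.Framework;

// Simplified stand-ins for the domain types discussed in the episode.
public class AuctionEntry { public int Id { get; set; } }
public class ConsortiumLot : AuctionEntry { /* extra property and collection elided */ }

[TestFixture]
public class AuctionEntryMappingTester
{
    // Factored-out, virtual creation method: derived fixtures override it so
    // every base-class persistence test runs again against the derived type.
    protected virtual AuctionEntry CreateAuctionEntry() => new AuctionEntry();

    [Test]
    public void ShouldPersistAndRehydrateEntry()
    {
        var entry = CreateAuctionEntry();
        // ... save via the data context, reload in a fresh context,
        //     and assert the rehydrated object matches what was stored.
    }

    // ... the remaining base tests (queries, updates, and so on) also call CreateAuctionEntry()
}

[TestFixture]
public class ConsortiumLotMappingTester : AuctionEntryMappingTester
{
    // All inherited tests now exercise ConsortiumLot and its discriminator row.
    protected override AuctionEntry CreateAuctionEntry() => new ConsortiumLot();

    [Test]
    public void ShouldPersistConsortiumSpecificData()
    {
        // ... tests for the additional property and collection on ConsortiumLot
    }
}
```

Because the fixture is polymorphic, the test runner reports the inherited tests under each derived tester, which is how the episode ends up with 11 tests on ConsortiumLotMappingTester.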
Thanks for watching Architect Tips. If you would like help improving your team's speed, quality, or software stability, send us a note to [email protected]. On behalf of everyone at Clear Measure, thanks for watching and may God bless you.
-
Welcome to Architect Tips, presented by Clear Measure, a software architecture company empowering .NET development teams to be self-sufficient, able to move fast, deliver quality, and run their software systems with confidence. Make sure to subscribe on YouTube or your video podcast feed. If you have a question for the show, send it to [email protected] and the next tip could be yours.
Hello and welcome to Architect Tips. Today I want to talk about the essentials of a private build. A lot of people are trying to do continuous integration, but they only have a manually triggered or automatically triggered compile process, which is only one of the steps of continuous integration. Continuous integration actually has three steps. The first is the private build; we do it locally. The second, the commit phase,
has the integration build, which runs on a build server; that's the build for your team. The third step is the first deployment and the fully deployed test suites. So continuous integration has those three main stages, and a lot of the tooling makes it easy to have an automatically triggered compilation process, but a lot of people leave it there. So I want to talk about the first step, which is not even on the build server yet; it is just something you have for the application itself, whether it's a Visual Studio solution representing a big application or a solution representing a small microservice that's just a stand-alone Azure Function or an individual job of some sort. So let's go through this. This is a build script whose structure you can actually follow. If you're familiar with my book, .NET DevOps for Azure, you can find the build script in the download files there, but I'm just going to go all the way to the bottom.
This is just straight PowerShell. At the bottom we have a function called CIBuild, and right above it we have PrivateBuild. If we look at what the private build is doing: we have some Chocolatey packages that this application needs, and we use the build script to install those directly so that somebody can just clone the git repository and immediately run the build, as opposed to clone the git repository, then install this, then install that, then configure this, then configure that. We want the experience to be: have the right Visual Studio version installed on the computer, clone the git repository, run the private build. So we're installing the Chocolatey packages, then Init, then essentially a clean compilation; that's running dotnet.exe or MSBuild. Then we do some environment setup for this application. A lot of times we have a SQL Server database, or a database of some sort, so we need to create a shell of a database locally, and after that, after the unit tests run, we run the integration tests, which typically do a lot of data access as well as other things. So that's the structure of it. Now I'm going to go back up to the top and walk through each one of these individually.
Up at the top again, this is just straight PowerShell, and all of the uniqueness is pulled up top. You could literally copy this PowerShell file; going back to 2005, I think almost every application and client that I have worked with has taken this structure, this layout, this template of a build script and used it in their applications. Now, of course, in 2005 it was not running with CruiseControl.NET, but it's essentially the same: you need a script file that does the work, and you pull all the unique stuff up into properties at the top. So you see the project name, MyProject, and later on you'll see that we can just use that name and put a .sln on the back of it. Now we have our Visual Studio
solution file. We have a source directory where the unit tests are, where the integration tests are, where the acceptance tests are. Most applications have some sort of user interface project, and you have whatever else you need: the database path, where you want your build artifacts to be, where you want the test artifacts to be, the path to your database migration tool, whether you're using RoundhousE or AliaSQL, and even a version number pulled directly from the build server. You can see we're having some parameters pulled in via environment variables, but if the variables aren't there, meaning it's running locally, we just default them to something; if it's running on the team's build server, then it's pulling in variables through the environment. So in Init, all that's doing is getting rid of artifacts from previous build runs. You see we do a dotnet clean and a dotnet restore; that's essentially cleaning everything out. Then the Compile step builds the project. Now, you really want to copy this script and use it, because over the years all the command-line switches for dotnet.exe have been absolutely tuned, including for running NUnit, and I know the getting-started samples online don't have that level of detail. If you want a very tuned build with all of the command-line switches that you want to use, then just get this build script and copy it. So that's our
compilation process. Then we need to run our unit tests. We don't need to do any other environment setup like setting up a database, because unit tests don't require a database; they only call code that stays in process. So we call dotnet.exe test, and again these command-line switches are the way you want to call it. After that, the integration tests come next in the listing, but in the actual running order we first do MigrateDatabaseLocal, which calls our database migration tool and creates the database schema from the folder where the scripts are. Then we call IntegrationTests, which calls the other test library with NUnit, and it could just as well be xUnit, via dotnet.exe test. AcceptanceTests is not used for a private build; it's used in the integration build against the fully deployed version, so in the private build we just skip that, and then essentially we're done. But we have some other PowerShell functions in here, because we want to use this exact same build script in our integration build process on the build server, whether you're using your own build server or Azure Pipelines. So we have PackUI and PackAdmin, for the two top-level processes that need to be deployed, and then PackDB, and we pack them into NuGet packages. Octo.exe, with OctoPack, is an open-source tool from Octopus Deploy that is the best at packaging deployable applications into properly formatted NuGet files, and whether you're using Octopus Deploy or not, this tool is indispensable. So that's the structure of a private build script, and you can see how, when we come back to calling PrivateBuild, it just runs it all. So I hope this helps. Get this template and use it. But even if you don't use this format, make sure that you have an independent script at the root of your git repository that can set up the local environment, build, and run all of the tests and validations that you have for that particular application. Whatever format you use, make sure that you have that asset for your development team so that you can move fast, increase and sustain quality, and ultimately run your software systems with confidence. Thanks, and if you have a question that you'd like answered on Architect Tips, just send me an email at [email protected]
Thanks for watching Architect Tips. If you would like help improving your team's speed, quality, or software stability, send us a note to [email protected]. On behalf of everyone at Clear Measure, thanks for watching and may God bless you.
-
Welcome to Architect Tips presented by Clear Measure, a software architecture company empowering .NET development teams to be self-sufficient, able to move fast, deliver quality, and run their software systems with confidence. Make sure to subscribe on YouTube or your video podcast feed. If you have a question for the show, send it to [email protected] and the next tip could be yours.
What's the difference between an architect and a developer? Let's talk about that. Now, as a developer, when you're joining a team, you're shown the ropes. You have a new workstation, you're given a tour of the documentation of the software and the source code, and you're given a first assignment. You're given the spec or the design for something to change first, where to go, what the process is, how to build the software, how to test the software, run the automated test suites, and build and deploy not only locally but maybe to a test or TDD environment before it's ready for manual testing, and then you get to work. Now, I understand that some of your experiences as a developer are that you get thrown in there, you're told nothing, the software has no documentation, the source code is disorganized, there's no build, no automated testing, no nothing, and you're given a bug and told, hey, go fix this, and you're fending for yourself. In either one of those scenarios, you as a developer have a job: to write code that is a change to the software system, a change that has been determined to be useful by somebody else, and to actually make that happen and produce a new build of the software with that change without breaking something else. Okay, your job is to do things, to implement the changes that are necessary. All right, now let's talk about the world of an architect.
Architect is what I do. If you're an architect, well, your umbrella starts with possibly ambiguous conversations with somebody who is funding the software, or maybe who owns the software, and they have some business objectives, outcomes that they want from the software, and they're talking about it in general terms. It's your job as the architect to hear what they're saying and what their goals are and to start formulating that
into some type of plan to make that happen, whether via one software application or a mix of changes to a number of software applications, whatever the scope is. Because as an architect, some architects work at the scope of a single small application, and some architects work at the scope of an entire organization with dozens and dozens of systems that all have a dependency graph between them. So as an architect your scope can be really, really huge at massive companies and massive organizations, or it could be very, very confined to a particular application, but the one common thread of what an architect does is translate the ambiguous into a business outcome facilitated by software. And so the developer's scope in the process is here: when we've already decided what we're going to change, now go implement it in the code. The architect's job is very broad, starting with the conversations with business stakeholders, all the way through breaking down the designs that we're going to select, doing a proof of concept if necessary, breaking it down into a sequence of work that can be done in order by a developer or multiple developers, the testing, and overseeing that we have proper quality control and promotion to downstream environments. And then, as we push it into production, making sure that the business outcomes that we were designing for are actually happening. Because if they aren't, we're not done yet: we were asked for a particular outcome, and we hypothesized that a set of designs would achieve the outcome. We aren't done until we've actually achieved those outcomes, because it may take a couple of tries; say, when we implement a few designs, we make some progress but we're not yet there, and now we need to come up with an additional design to help with that. So that's you as an architect. Now, the word architecture itself doesn't have a good definition. Try doing an internet search: you're not going to find much, or you're going to find a lot of stuff but you're not going to find agreement. You'll see things like architecture is the carefully designed structure of something, or architecture is the hard stuff, or architecture is the intersection of designs that produce the whole, and you'll have a myriad of other commentary on message boards all about what architecture is, but that's not what's so important.
What I want to focus on is the role: not architecture, but developer and architect. Some of you listening to me who are developers are thinking, wait a minute, I do all the stuff that you just described as the architect role. That's great; that means that you are fulfilling the architect role, and it's good to recognize that you're already doing some or a lot of that job. Now, the biggest thing about an architect is that architects produce outcomes. An old scripture says, "you shall know them by their fruit" (Matthew 7:16), and that is true: architects produce outcomes. If an architect is not producing an outcome by orchestrating an ambiguous request into everything concrete that needs to happen, then they're not doing the architect job. So I hope that helps with the difference between an architect and a developer. Go empower your team.
Thanks for watching Architect Tips! If you would like help improving your team's speed, quality, or software stability, send us a note to architecttips@clear-measure.com. On behalf of everyone at Clear Measure, thanks for watching and may God bless you.
-
Welcome to Architect Tips. I am Jeffrey Palermo, and I am going to show you today how to use architecture diagrams in a really, really easy way. Now, as we go through, there are a lot of resources and a lot of interesting topics on the Azure DevOps Podcast; you will want to check that out as a resource. And on this show, Architect Tips, I will answer your architecture questions. All you have to do is tweet me @jeffreypalermo and I will pick a question, and if I can put it into a short, five-minute, bite-sized chunk, I am just going to make one of these out of it. Otherwise, I will just send you an email and answer it for you.
So, go ahead and send me a question there. Now let us get into architecture diagrams; I want to make it easy and show you some diagrams that I use. So, let us take a system you want to communicate. If you want to communicate what you want to be built, or what you are going to build, with your teammates, then you need to draw something.
If it cannot be drawn, it cannot be built. Every profession that creates something has some type of design diagram, and so we need that here. So, let us pretend that we are starting a work order application. We are going to build a work order application, and so we need some diagrams that are going to communicate it, and the bread-and-butter diagram is going to be your object diagram.
You could also think of it as a class diagram. We are also going to make ample use of sequence diagrams. Then you might have some patterns: in the case of a work order, a work order has a status and it has a workflow, and so we need to make use of a state machine diagram. And a lot of times an activity diagram will be interesting, or an overall dependency diagram.
So let's just take these and go through them one at a time and see how quickly we can use these particular diagrams to describe a work order application in a fashion where you would actually understand what .NET code needs to flow from this. So, let us start with a dependency diagram. You know I am a fan of onion architecture. If I am laying out a Visual Studio solution and I say, you know what, it is going to be .NET, it is going to be Blazor WebAssembly, we are going to use C# 9, we are going to use SQL Server, and we are going to put it up in Azure, well, one of the first questions is: what does my Visual Studio solution look like?
So, I am just going to quickly spec it out. I am not going to explain why, but I am going to have a Core library; that is, you know, .NET Standard 2.0. I am not going to go into all the rest of it. Then we are going to have a UI project, and we are going to have a data access project, and I am not going to worry about my handwriting.
I am just going to do this. And then we are going to have, let us see what else: I know that I need some unit tests, so we are going to have a unit test library, and I am going to have some integration tests. These are going to be projects. And so we have them here: unit tests are going to be linked to Core, and integration tests are going to be linked to DataAccess and Core most of the time.
And if I put in acceptance tests over here, most of the time they are linked to DataAccess, Core, and UI. So that is our test coverage. Now, I did not put any arrows here; we need to communicate that. Well, the Core needs to have no other dependencies than .NET Standard and, whatever, only the most stable libraries; you are only as stable as the libraries you depend on.
So, I am going to make sure that the UI references Core, data access references Core, and unit tests reference Core: all incoming references. I like that. No outgoing references for Core. Acceptance tests, let me just finish these, and I have finished all of them. Alright, so that is our dependency diagram.
Now, what about our core object model, or domain model? I love domain-driven design, the patterns, the language that it gives us. Let us just put the work order right here in the middle. This is going to be a class diagram, or just an object diagram. I have a work order, and a work order does need a status.
I think, you know, Draft. So we need a status object, and whether it is an enum or an enumerated class, maybe we call it WorkOrderStatus, all abbreviated here. And we have things like Draft, right? Maybe another one is Submitted, maybe another one is Assigned, assigned to a technician, and maybe then it is InProgress.
And maybe it is Completed, or maybe Cancelled; you know, you can cancel a work order. So we have a status, a has-a relationship. And we also need a submitter, somebody who is going to submit the work order. If I have an organizational list here, well, there is another entity, Employee, and an Employee is the submitter.
Let us see, then we need someone to assign the work order to, an assignee, and you can assign it to a technician. But, you know, a technician is also an employee, so let us say there is an inheritance relationship, the open triangle. And maybe later on down the road, with approvers and whatnot, we may have a Manager; a Manager is kind of an Employee, and a Technician has a Manager, so we can do that. I think we will leave it at that: a Manager is an Employee, a Technician is an Employee, and a Technician has a Manager. All right, so there is our object diagram.
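As a rough sketch, that object diagram might translate into C# types like the following; only the names and relationships come from the diagram, and the specific members are illustrative assumptions.

```csharp
// Hypothetical translation of the whiteboard object diagram into C# types.
public enum WorkOrderStatus { Draft, Submitted, Assigned, InProgress, Completed, Cancelled }

public class Employee
{
    public string Name { get; set; }
}

public class Manager : Employee { }

public class Technician : Employee
{
    // A technician is an employee and has a manager.
    public Manager Manager { get; set; }
}

public class WorkOrder
{
    public WorkOrderStatus Status { get; set; } = WorkOrderStatus.Draft;
    public Employee Submitter { get; set; }    // who submitted the work order
    public Technician Assignee { get; set; }   // who the work order is assigned to
}
```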
Taking that object diagram into consideration, next let us figure out how we are going to put this application together. Let us do a sequence diagram for just creating a new work order; when we create it, it is going to be in Draft status. Starting out with a sequence diagram to explain this, I have got my actor, and I am just going to blow through it; if you have never heard of these diagrams, just follow along.
I am not going to go into UI patterns; I am just going to be abstract and say, okay, UI: a person is going to use the UI, go to a screen, and draft a work order. All right, so what happens? Well, in the simplest sense, if we are not really using any kind of scalable patterns, we might say that the UI uses the WorkOrder object directly, and the controller or the screen code or whatever component just calls the constructor of the WorkOrder directly. And then, a common pattern: if I have a WorkOrderRepository class, maybe the UI then goes directly to it and says, hey, go ahead and save this work order.
And of course, then we have SQL Server, and the WorkOrderRepository is just doing an insert statement. Okay, that could be a possibility, but it does not scale at all, because now we have to do all of that for the Manager, the Technician, the Employee; it just puts too much logic there and basically guarantees that all of your logic, except for your SQL statements, is going to be right there in the user interface.
So, we do not like that; we want to do something a little bit better. So, let us talk about drafting. Let us consider an alternative and come up with a pattern: let us borrow a little bit from CQRS, let us draft some commands, and let us use a bus pattern.
So, let us try this again; how would we describe it if we were going to do another panel? Let us ignore the user interface for a little bit longer; the actor is still going to draft the work order. And then, instead of going directly to creating our aggregate root in our domain model, which would be WorkOrder, one of our entities that serves as the root of an aggregate, let us instead craft a command where we capture the user's intent. The intent of the user, the request of the user, is to draft a work order. So let us say DraftWorkOrderCommand; I am abbreviating, but in C# you would not abbreviate, and all of these become types.
So, we would create this command like a DTO, and then we would pass it to a bus, or IBus, which is an abstraction. So I would now say bus.Send, and then the bus would look around for a request handler. If we are going to specify this type of pattern, then it is going to look for the right handler and say, hey, will you handle this command?
Okay, great, we need to handle that one. How do we do that? Well, we need a class that inherits from here; this would be a handler for the DraftWorkOrderCommand. And yes, all your diagrams like this can be completely flexible; you can mix and match paradigms. The open triangle means inheriting from, and so, through inversion of control, a handler would extend this abstraction, and it is going to implement the Handle method.
Okay. You can mix and match all of these diagrams, all these things. And so we have a Handle method. Well, how does it do it? We are going to have a WorkOrder object, and this is the class that calls the constructor of the WorkOrder and sets the status to Draft. And now we need to save that.
Well, let us see how this continues. So, we have a DraftWorkOrderHandler over here, a handler class that handles the command. Now, in this handler, we can turn right back around and create some type of command to send back to the bus.
So, say we have a SaveEntityCommand, or a SaveAggregateCommand. I create that command, I pass the work order into this SaveAggregateCommand, and of course I get the object back, and then I pass it back and say bus.Send, passing this SaveAggregateCommand. Now I am asking the bus: hey, here is another command that you need to handle,
that you need to route. And so, then what happens? Well, we go right back to our abstraction and say, hey, I need to save an object. Of course, in this case, a WorkOrder would probably, if you have IEntity or some type of interface, have an inheritance relationship there, but let us just say we have a SaveAggregateHandler that, oh, by the way, inherits from IRequestHandler.
And so, in this case, that is how the relationship would work: it would implement the Handle method. And then, if I just extend this over here for drawing's sake, we would have SQL Server, and then we would have an insert statement. Okay. So that is how we can use our sequence diagram to describe what we are doing with a sophisticated application pattern.
After you design this, every one of these would then be on the docket as development tasks: hey, we need the DraftWorkOrderCommand, we need the IBus, we need the IRequestHandler, we need a DraftWorkOrderHandler, we need IEntity, the WorkOrder class, the SaveAggregateCommand, and the SaveAggregateHandler. Those would all be code files that we need to create, as in the sketch below.
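Here is a minimal sketch of those types, reusing the WorkOrder and WorkOrderStatus types from the earlier object-model sketch; the member signatures are assumptions, since the episode only names the types on the whiteboard.

```csharp
// Hypothetical shapes for the types named above. A real bus would use an
// inversion-of-control container to locate the handler for each command type.
public interface IBus
{
    void Send(object command);
}

public interface IRequestHandler<TCommand>
{
    void Handle(TCommand command);
}

public class DraftWorkOrderCommand
{
    public string Description { get; set; }
}

public class SaveAggregateCommand
{
    public WorkOrder Aggregate { get; set; }
}

public class DraftWorkOrderHandler : IRequestHandler<DraftWorkOrderCommand>
{
    private readonly IBus _bus;
    public DraftWorkOrderHandler(IBus bus) => _bus = bus;

    public void Handle(DraftWorkOrderCommand command)
    {
        // The handler, not the UI, constructs the aggregate and sets Draft status...
        var workOrder = new WorkOrder { Status = WorkOrderStatus.Draft };

        // ...then turns right back around and sends another command to persist it.
        _bus.Send(new SaveAggregateCommand { Aggregate = workOrder });
    }
}

public class SaveAggregateHandler : IRequestHandler<SaveAggregateCommand>
{
    public void Handle(SaveAggregateCommand command)
    {
        // Persist the aggregate; ultimately an insert against SQL Server.
    }
}
```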
Now, a work order is something that has a process, a flow, and so we can use a state machine. If I have a work order right here, the first status that we have is Draft. The next status we have is Submitted. And these are all facts, so they are all past tense as a status; it is not a verb, because it is something that is submitted or is drafted.
Then another status would be Assigned, and then maybe the next status is InProgress, and the next status is Complete. I am going to ignore Cancelled for now, but you can see that we have transitions, kind of an asynchronous workflow that this work order has. Now, all of these arrows: you name them with a verb.
So, if we are going to go from Draft to Submitted, then we are going to submit the work order. To go with this next transition, we are going to assign; see, that is a verb. If we are going to go from Assigned to InProgress, we are going to begin, and to go from InProgress to Complete, we are going to complete it, which
is the same verb. Okay. Now let us say I have assigned it, but now I say, no, I want to bring it back to the previous status. Now we can go backward, and we can say maybe, not unsubmit, but unassign. And if I go backward, I can submit again. And if it is in progress and, oh, wait a minute,
well, maybe I can shelve it, whatever word you want to bring up. And if it is completed and I want to go back to in progress, maybe I call it, hey, I am going to reopen the work order. Now, what do we have here? We have a lot of lines, we have a lot of bubbles. How do we translate this into actual code?
Well, we already have our WorkOrder. It is an entity; we have declared it to be the root of an aggregate, in domain-driven design speak. And Draft is a status: whether we use an enumeration or an enumerated class, Draft is essentially an instance of a status, same with all of these. And then every one of these lines is a command.
So, let us just make up an abstraction; we need an abstraction to talk about and reason about. Let us see: IStateTransition. Or, yeah, IStateChange; either is great. And so, if we have an IStateTransition, that means that every one of these lines could represent an implementation of IStateTransition.
All right. So, what would we really need this abstraction to define? Well, we absolutely need to know whether the transition is valid, and we need the work order to see if it is valid. And then how about IsAllowed, which takes maybe the work order and whatever employee is trying to do it?
That way we can create an implementation; we can say SubmittedStateTransition. Well, hey, is this one valid? Well, it is if the status of the work order is Draft, but it is not valid if the status is something else. Well, is it allowed? Let us make up a business rule here. If we are in progress, is the Assigned state transition allowed?
Well, it is not valid, but IsAllowed might only look at who it is. So, if the work order is assigned to the employee that was passed in, well, maybe it is allowed, because they can do something next. But if it is not assigned to them, maybe IsAllowed says no: you cannot transition, you cannot do anything with a work order that you are not assigned to.
So, this is how we make these types of pattern decisions, and this is a state diagram. Then let us just count how many of these implementation classes we are going to create: we need eight implementations of this abstraction, as sketched below.
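Building on the earlier sketches, a hedged sketch of that abstraction and two of its implementations might look like this; the IsValid and IsAllowed signatures follow the discussion above, and the business rules are just the illustrative ones from the episode.

```csharp
// Hypothetical shape of the state-transition abstraction discussed above.
public interface IStateTransition
{
    // Is this transition legal from the work order's current status?
    bool IsValid(WorkOrder workOrder);

    // May this particular employee perform the transition?
    bool IsAllowed(WorkOrder workOrder, Employee employee);
}

// Two of the eight implementations counted in the episode.
public class SubmittedStateTransition : IStateTransition
{
    // Submitting is only valid while the work order is still a draft.
    public bool IsValid(WorkOrder workOrder) =>
        workOrder.Status == WorkOrderStatus.Draft;

    // Illustrative rule: only the submitter (or anyone, if unset) may submit.
    public bool IsAllowed(WorkOrder workOrder, Employee employee) =>
        workOrder.Submitter == null || workOrder.Submitter == employee;
}

public class AssignedStateTransition : IStateTransition
{
    public bool IsValid(WorkOrder workOrder) =>
        workOrder.Status == WorkOrderStatus.Submitted;

    // Illustrative rule: you cannot act on a work order assigned to someone else.
    public bool IsAllowed(WorkOrder workOrder, Employee employee) =>
        workOrder.Assignee == null || workOrder.Assignee == employee;
}
```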
And so, as we draw these diagrams, it becomes obvious what tasks to perform and how to fill up our development task board, how to take a feature that has come onto our Kanban board and decompose it into the different development work that we need to perform. So use a whiteboard, use paper. I do not immediately go to Visio; I do not even immediately go to PlantUML. Just use your pen or pencil and draw some of these application architecture diagrams.
Because in order to get clarity with everybody, you need to have a picture. If you are just using your words, everybody is trying to get the same picture in their head of what you intend to do and how to do it; everybody needs to build that picture for themselves. And if you let everyone do it just from words, everyone is going to have a slightly different picture.
So, if you can distill your understanding and your decisions into a picture, now you have given them the picture instead of forcing them to come up with their own, which is invariably going to be a little bit different than the picture in your head. So, I hope that helps. Again, if you have an architecture question, just tweet at me @jeffreypalermo, and I will be happy to answer your question.
And as always, if you need any architecture help: at Clear Measure, we are a software architecture company. We exist to help you move fast, deliver quality, and run with confidence, and to help you do more internally so that you and your team are performing at another level, more capable and delivering for your customers and for your company, which is fantastic.
So, thank you so much. And until the next time.
-
In this architect tip, we’re going to be talking about Versionable Architecture Diagrams! As always here at Clear Measure, we are a software architecture company, and our goal for you is to be able to move fast, deliver quality, and run your systems with confidence!
Having architecture diagrams that work for you is part of that. Now, we want to have beautiful diagrams just like this one, but doing them in Visio is just hard. So, let's get into it.
The first thing that we need to do in order to get started with these types of diagrams, using this method, in a versionable fashion, is to install a few Chocolatey components. And it's really easy: you can download the files, and if you've ever used Chocolatey before, you just install these components; make sure you're in a PowerShell window that's running as an administrator. After that, you open up VS Code and install the PlantUML extension.
We already have it installed here, so it's ready to go, and immediately after that you can start creating your own diagrams with PlantUML. Now, you can use PlantUML raw, or you can use some C4 extensions, which we'll talk about. It starts with your development process. If you've gotten into our Onion DevOps patterns, then this will be very familiar to you, but this is essentially describing a DevOps pipeline for a particular application or for a particular team.
We have our git repository and our integration build, which kicks off the series of environments that we have: our TDD environment (or multiple), a UAT manual testing environment, and production, pushing telemetry over to Azure Application Insights, with all of the deployments getting the deployable packages from Azure Artifacts. So we want to describe this, but we don't want to mess with Visio or any of the diagramming tools, and so we can do it in text. This is that diagram in text, and we're using the helpers from the Azure PlantUML extensions, which you can get on GitHub.
We define an actor, which is a developer, and we just start using the symbols; these are their objects, and these are methods, essentially. So we're using the symbols, we're defining the structure, and then we're defining the process: the actor, the developer, makes a change and pushes to Azure DevOps. You can see each of the different objects is related either forward or backward with the single-hyphen or double-hyphen arrows, and we can describe what they're actually doing.
The text that we use to describe them is the text that is painted on the arrow between the symbols. And so the text on the left translates directly into the diagram: easy, super easy, versionable, and it's all text.
The next type of diagram is the system-level diagram. For example, say you're developing a new system: you already have an ERP system that a supplier uploads a file to, and you want a customer to receive a text message. So we're sketching out this new application that might have to consume some messages from our ERP system; okay, that's the system boundary. And so the system-level diagram for this new application is going to be really, really simple.
We define two persons, and again we're using the C4 extensions based on the model defined by Simon Brown; there are several supporters, and Ricardo is one of those who has extensions out there for PlantUML. We're defining our elements and defining the relationships between them, and that's it. We just hit Alt+D and render the diagram, easy as pie. We can change the layout to left-to-right, and it just realigns and does its best to guess the direction of the diagram.
But that's it for the system level. OK, so now that we've done the system level, maybe we want to zoom in to how we're going to structure this new application. The next level is the container level in C4, the container diagram. So we have this new application. Let's zoom in and then we try to figure out what is going to comprise that new application. We still have our ERP system, still have Twilio, still have the customer and the supplier.
But now we're going to make decisions like: maybe we'll use Blazor WebAssembly as a client, maybe we'll use a .NET Core API running on the server, maybe we'll use a SQL Server database. So now we're starting to define the architectural elements of how this new application is going to come to be. In the same way, we just look at the text on the left, and it is very, very straightforward. We have a key new level, which is the system boundary.
And within the system boundary, we've elevated one of our external systems to the system boundary. Then we have some containers for each piece that makes up the new system, and then relationships between them; again, Rel is short for relationship. So what if we need to dig in even more? We're going to zoom in again and start defining maybe some patterns for how we're going to implement Blazor, and how we're going to implement the logic on the back end in the .NET process that's calling our database.
So here we can see that we're describing a synchronous application bus, and we have some command handlers and some query handlers, and those are talking to the database. Maybe one of the command handlers calls out to the SDK or the API of the Twilio service in order to send the text message. And so that's great: a really clear level of understanding from this diagram of what we intend the structure of the code to be. And the text on the left, the definition of this diagram, is again super straightforward.
We have a container boundary for the application, the .NET Core application that's running on the server, and we're breaking that up into components and the relationships between them. So, again, you can just define all of these symbols in text, and if you are familiar with Markdown for documents, it's the same idea. So once again, to review: install a few dependencies via Chocolatey; these are the three things that we need for the VS Code extension to be able to generate these diagrams.
Alt+D is the hotkey to make the diagram pop up: hold down the Alt key and press D, and you can have system-level diagrams, and diagrams at various levels, working for you. Hope this helps. Happy diagramming!
-
Welcome to another Architect Tip. I am Jeffrey Palermo, your host, and we are going to be talking a little bit about a tip specific to the new Blazor framework for .NET Core. We talked to Steve Sanderson, the original inventor of the first version of Blazor, on the Azure DevOps Podcast recently, so you might be interested in checking that out. What we are going to do here is talk about how to track your circuits and how to know how many people are using your application and what your distribution is. This tip is going to be specific to the server side, because your client running in JavaScript is the same; it is going to be running in a browser, and the Razor components are going to stream the changes to your screen over WebSockets. As you look at your development tools in your browser, you are going to see a bunch of binary messages going across; those are the actual changes to your screen, and that is the communication.
Now, if you're running in Azure with multiple instances and you have some custom auto-scaling rules, which you're going to want to do, then you are going to have the question: how many of the users, or how many of the connected circuits, are tied to each of the web servers? Because you are going to be using sticky sessions; that is ARR affinity, as Azure calls it. So once a user gets assigned by the load balancer to a particular web server instance in your App Service plan, it is going to stay there for the life of the session, which is the circuit. So you are going to want to know: okay, what is my distribution?
Do I have one that is overloaded? If you have scaled up, that does not necessarily cause the user sessions to be more evenly distributed. Once a user is assigned to an instance it is there. You can scale up after the fact, but that is only going to affect new users that come in after you have scaled up.
If the original instances are already overloaded, they are going to remain overloaded unless you do something to specifically force the closure of those circuits and let the auto-retry logic reconnect them. So, what you are going to want is a graph in your dashboard in Application Insights
that looks a little bit like this: on the top right you can see average open circuits, on the bottom left you can see the average connected circuits, and on the bottom right is how many circuits have disconnected over time. It is often also good to track the memory of each of the instances of your application.
We can see that on the top left in this example, and you do not get this out of the box. You are going to need to emit some custom metrics to Application Insights. I'm going to show you how to do that with Blazor server side. As I go over to Visual Studio, I just want to show you that there is a class in Blazor called CircuitHandler, and you can find it at Microsoft.AspNetCore.Components.Server.Circuits.CircuitHandler. In .NET 5 it is going to be really easy to find since things have been refactored; with .NET Core 3.1.8 it lives in the Microsoft.AspNetCore.Components.Server assembly that ships with ASP.NET Core, so you may have to dig around a bit, but that is where it is. You're going to inherit your own class from this, register it in your services collection, and you can do whatever you want with it, because you're going to get events for circuit up and down.
Look at what we've done here: we've created names for the different events, a dictionary for open circuits, and a timer so that we don't emit this metric every single time an event fires. Then the events that you get are circuit opened and circuit closed.
When a circuit opens, that method is going to be called for us by the framework, and we record in our dictionary that this circuit is open. Using a dictionary makes sure we do not end up with duplicates, because we have found in real practice that these events do not get fired in a completely synchronous, orderly fashion.
You will want some type of hashing, mapping, or dictionary-based deduplication, and just using a dictionary works fine. So that is our open-circuits logic. You could use a simple int counter or whatever logic you want; you get the opportunity to do something when the circuit opens and when the circuit closes. You also get correlated events for connection up and connection down.
We do the same thing with the dictionary on those. So that is your Architect Tip for Blazor server-side connection tracking. As you get your applications into production, it is going to be important to know how many open circuits you have per web instance and what your traffic is overall.
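To make that concrete, here is a minimal sketch of the kind of circuit handler described above. It is not the exact code shown in the episode; the class name, the metric name, and the use of Application Insights' GetMetric pre-aggregation (standing in for the timer mentioned above) are my own assumptions.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.AspNetCore.Components.Server.Circuits;

// Hypothetical example: tracks open Blazor Server circuits on this web instance and
// reports the count to Application Insights as a custom metric.
public class TrackingCircuitHandler : CircuitHandler
{
    // Keyed by circuit id so duplicate or out-of-order events do not double-count.
    private readonly ConcurrentDictionary<string, DateTime> _openCircuits =
        new ConcurrentDictionary<string, DateTime>();

    private readonly TelemetryClient _telemetry;

    public TrackingCircuitHandler(TelemetryClient telemetry) => _telemetry = telemetry;

    public override Task OnCircuitOpenedAsync(Circuit circuit, CancellationToken cancellationToken)
    {
        _openCircuits.TryAdd(circuit.Id, DateTime.UtcNow);
        ReportOpenCircuits();
        return Task.CompletedTask;
    }

    public override Task OnCircuitClosedAsync(Circuit circuit, CancellationToken cancellationToken)
    {
        _openCircuits.TryRemove(circuit.Id, out _);
        ReportOpenCircuits();
        return Task.CompletedTask;
    }

    // The connection up/down events fire when the browser's SignalR connection drops and
    // reconnects; the same dictionary pattern applies if you also want to chart
    // connected vs. disconnected circuits.
    public override Task OnConnectionUpAsync(Circuit circuit, CancellationToken cancellationToken)
        => Task.CompletedTask;

    public override Task OnConnectionDownAsync(Circuit circuit, CancellationToken cancellationToken)
        => Task.CompletedTask;

    private void ReportOpenCircuits() =>
        // GetMetric pre-aggregates locally before sending, so we are not emitting a
        // full telemetry item on every single event.
        _telemetry.GetMetric("OpenCircuits").TrackValue(_openCircuits.Count);
}

// Registration (for example in Startup.ConfigureServices), assuming Application Insights
// is already wired up with services.AddApplicationInsightsTelemetry():
//
//   services.AddServerSideBlazor();
//   services.AddSingleton<CircuitHandler, TrackingCircuitHandler>();
```

Registered as a singleton, one handler instance sees every circuit on that web server, which is what you want for a per-instance count; chart the custom metric split by cloud role instance in Application Insights and you get the distribution graph described above.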
So, I hope you enjoyed this architect tip, and we will see you next time.
-
Welcome to Architect Tips with a tip so that you can get your team to move faster, deliver quality, and run your system with confidence!
We will talk about the architecture of Blazor and some of the key differences you need to understand if you are running a Blazor application. Before we do that, you might want to check out the Azure DevOps Podcast, for .NET developers who are shipping software with Microsoft platform technologies; go to www.azuredevops.show.
Blazor is a different architecture; it is a new architecture. Blazor runs on top of .NET Core. The server-side model has been out for several months now; WebAssembly just came out in May, and people are still trying to figure that out. Many applications are already being developed with the Blazor server-side model, and I want you to understand the key differences between that model and regular web applications so that you can be successful. What is important to understand is that Blazor runs on top of ASP.NET Core, so startup, the middleware, all of that, and running it in Azure will be the same. It runs in process. But the Razor components are a different programming model in the UI, different from your ASP.NET MVC controllers or Web API controllers, and it is a stateful programming model. When a Razor component paints a screen, that component instance stays alive in memory on the server for the entire time your user has that screen open in their browser. What happens is that on the first request to your URL, your web server returns a JavaScript file. That JavaScript contains the Blazor client; it ships with the framework, so you do not need to mess with it. The client subscribes to a built-in SignalR connection run by your web application: Blazor automatically publishes a SignalR hub, and every bit of communication between the browser and the server goes across that hub. If you are hosting in Azure, you will want to use the Azure SignalR Service, because it is scalable and takes that processing load off your web instance.
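If you do route the Blazor hub through the Azure SignalR Service, the wiring looks roughly like the sketch below. This assumes the Microsoft.Azure.SignalR NuGet package and the standard Blazor Server template layout (the _Host page, Razor Pages, and the default Azure:SignalR:ConnectionString configuration value); it is an illustration, not code from this episode.

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Azure.SignalR;
using Microsoft.Extensions.DependencyInjection;

// Startup for a Blazor Server app whose SignalR traffic is offloaded to the
// Azure SignalR Service.
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddRazorPages();
        services.AddServerSideBlazor();

        // Hand the WebSocket connections to the Azure SignalR Service.
        // Required sticky mode keeps each circuit pinned to the server
        // instance that holds its in-memory state.
        services.AddSignalR().AddAzureSignalR(options =>
        {
            options.ServerStickyMode = ServerStickyMode.Required;
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseStaticFiles();
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            endpoints.MapBlazorHub();
            endpoints.MapFallbackToPage("/_Host");
        });
    }
}
```

Required sticky mode matters here because a circuit's state lives in memory on one particular server; a reconnect routed anywhere else would land on a server that knows nothing about that circuit.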
Either way, it is important to understand what happens: that Razor component instance is going to stay alive in memory on the server the entire time that screen is on the page, no matter how many times things are clicked and no matter how many times a section of the page is swapped out.
Now, if you navigate to another top-level page, then it will go out of memory and be cleaned up. That is how Blazor server-side works: every session, every user's screen, and what each user is doing is resident in memory on your web server, and the changes to the screen are messaged through that SignalR hub as binary messages. So what matters is the latency of the network connection. If you have really bad latency, your users are going to see slowness. For instance, a single interaction may need 10 round trips to the server, which is not unheard of; the messages are really small, but at 100 milliseconds of latency each, that adds up to a full second. You have to keep that in mind; that is the architecture.
Let's now go over some of the settings. When you are publishing to Azure, make sure you choose 64-bit, because you will be using more memory. Every one of your users keeps their session state on the server; everything they do keeps those objects alive on the web server. You will be using more memory as a trade-off for that phenomenally fast development model.
You can crank out applications so quickly; it is so much more productive than JavaScript and the JavaScript ecosystem. Next, you will want to turn on web sockets, which are off by default in Azure App Service. Blazor relies on SignalR through its built-in hub, and if you leave web sockets off, SignalR is going to fall back to long polling, which sends a lot of requests to IIS. You will see in Application Insights that your application is not really doing anything, yet you are getting a ton of requests; that is a tip-off that maybe you do not have web sockets turned on.
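The web sockets switch itself is an App Service configuration setting rather than code, but if you would rather have the application fail fast than silently degrade to long polling, one option is to restrict the Blazor hub to the WebSockets transport. This is an optional hardening sketch, not something shown in the episode, and the helper name is made up.

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http.Connections;
using Microsoft.AspNetCore.Routing;

// Hypothetical helper: map the Blazor hub so that only the WebSockets transport is
// allowed. If the host has web sockets disabled, the client fails to connect instead
// of quietly falling back to long polling and flooding the server with requests.
public static class BlazorHubEndpoints
{
    public static void MapWebSocketOnlyBlazorHub(this IEndpointRouteBuilder endpoints)
    {
        endpoints.MapBlazorHub(options =>
        {
            options.Transports = HttpTransportType.WebSockets;
        });
    }
}
```

You would call endpoints.MapWebSocketOnlyBlazorHub() in place of endpoints.MapBlazorHub() inside UseEndpoints. Whether you want this depends on your tolerance for the long-polling fallback; some teams prefer to keep the fallback and simply alert on the request pattern described above.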
You can also press F12 to open the browser developer tools and make sure the connection is using the WSS protocol for the web socket. You also want ARR Affinity turned on, because we keep in-memory objects around for user sessions; essentially, you are turning on sticky sessions.
When the first request comes in, a user is assigned by the load balancer to one of your web instances, and that user will use that web server instance the whole time they have that tab open for that session.
They may be assigned to a different instance if they close the browser tab and come back, essentially recreating everything from scratch, but during a session they stay on one web instance.
You also want to learn how Blazor server-side interacts with the browser, because all of the data, everything shown on the screen, comes through binary messages on a SignalR circuit, and then the Blazor JavaScript client running in the browser takes that information and appends children to the DOM.
You can inspect all the different frames coming through just by looking at the Performance tab in the Chrome developer tools. It is really important to understand how many frames are necessary per page, because if you have a ton of them, it will make for a slow page. You want to use that as a point of optimization.
You will also want to look at how many round trips your application is making to the server. Go to the Network tab in the tools and click on the web socket connection, and you can see all of these binary messages. If you select one, you can see what it is doing at every stage; each message is a round trip. We can see the OnRenderCompleted message right here, and I have zoomed in on the bottom.
You will want to inspect that and understand what your application is doing with these binary messages. Now, the Canary build of Edge, the Chromium version of Edge, has enhancements in the works in its developer tools to provide a visualizer for these binary messages, so that you can sniff the wire and see exactly what is coming through. Right now the view is not very helpful beyond confirming that a message came through and how big it is, but the tooling is coming so that we can see exactly what is inside these messages, because that will be important.
When you are running in Azure, you want a minimum of two web instances. Because Blazor server-side uses sticky sessions, if something happens to one web instance, you want the connected sessions to be able to move immediately to another instance of the web server.
With the auto-reconnect behavior that Blazor has in the JavaScript client, that can happen automatically for you, as long as you have that second web server those users can be assigned to.
I hope that helps and if you are starting a Blazor project, let us help you avoid those pitfalls. Clear Measure is a software architecture company that really wants to help your development team move fast and deliver quality and run your systems with confidence so that you can get more done internally within your team and deliver world-class results. Thank you very much, and that was another Architect Tip.