Sunday, November 29, 2015

A Collection of Principles for Fail-Fast

Previously I blogged about how EdgeHTML has adopted a fail-fast model: identifying hard-to-recover-from situations and faulting, rather than trying to roll back or otherwise proceed. There we covered a lot of detail on the what and even the how. At that time I didn't establish principles, and since writing that article I've received a lot of questions from my own developers about when to use fail-fast. So here they are, the principles of fail-fast.

Principle #1: All memory allocations shall be checked and the process shall fail-fast on failure

Follow the KISS principle and just assume that all out-of-memory conditions (including stack overflows) lead to a situation in which, even if recovered from, first-, second- or third-party code will not run correctly.
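
To make this concrete, here is the kind of thing I have in mind. This is just a sketch: FailFast and CheckedAlloc are names I'm inventing for illustration, and a real Windows implementation would more likely sit on __fastfail or RaiseFailFastException than std::abort.

#include <cstddef>
#include <cstdlib>

[[noreturn]] void FailFast()
{
    // Stand-in for a real fail-fast primitive; on Windows this would likely be
    // __fastfail or RaiseFailFastException rather than std::abort.
    std::abort();
}

void* CheckedAlloc(std::size_t bytes)
{
    void* memory = std::malloc(bytes);
    if (memory == nullptr)
    {
        FailFast();   // never limp along without memory the code assumed it had
    }
    return memory;
}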

Exceptions:

Exploratory allocations may be recoverable. Textures are a commonly used resource and are limited in availability, so some systems may have a recovery story when they can't allocate one. However, even these systems likely have some required memory, such as the primary texture, and that allocation should be demanded.

Principle #2: Flow control is ONLY for known conditions. Fail-fast on the unknown.

When writing new code, favor fail-fast over continuing on unexpected conditions. You can always use failure telemetry to find common conditions and fix them. Telemetry will not tell you about logic bugs caused by continuing down the unexpected path.

A prime example of this is when using enumerations in a switch. It's common practice to put in a non-functional default case with an Assert. This is way too nice and doesn't do anything in retail code. Instead, fail-fast on all unexpected flow control situations. If the default case handles a set of conditions, then put in some code to validate that ONLY those conditions are being handled.
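
For instance, a sketch of what I mean (ParseState and Advance are invented for illustration; FailFast is the helper sketched under Principle #1):

[[noreturn]] void FailFast();   // as sketched under Principle #1

enum class ParseState { Start, InTag, InText, Done };

void Advance(ParseState state)
{
    switch (state)
    {
    case ParseState::Start:  /* ... */ break;
    case ParseState::InTag:  /* ... */ break;
    case ParseState::InText: /* ... */ break;
    case ParseState::Done:   /* ... */ break;
    default:
        FailFast();   // unknown value: crash now and let telemetry surface it
    }
}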

Third-party code is not an excuse. It is even more important that you use fail-fast to help you establish contracts with your third-party code. An example is a COM component that returns E_OUTOFMEMORY. This is not a SUCCESS or S_OK condition. It's NOT expected. Using fail-fast on this boundary will provide the same value as using fail-fast in your own memory allocator.
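
At such a boundary something along these lines is enough (illustrative only; MustSucceed is a made-up helper name):

#include <windows.h>

[[noreturn]] void FailFast();   // as sketched under Principle #1

void MustSucceed(HRESULT hr)
{
    if (FAILED(hr))
    {
        FailFast();   // E_OUTOFMEMORY and friends are not part of the expected contract
    }
}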

Exceptions:

None. If there is a condition that should be recovered then it is a KNOWN condition and you should have a test case for it. For example, if you are writing code for the tree mutations caused by JavaScript operations on the Browser DOM, then there are known error recovery models that MUST be followed. No fail-fast there because the behavior is spec'ed and failure must leave the tree in a valid state. Maybe not an expected state for the developer using the API, but at least spec'ed and consistent.

Principle #3: Use fail-fast to enforce contracts and invariants consistently

Contracts are about your public/protected code. If you expect a non-null input, then enforce that with a fail-fast check (not much different from the allocation check). Or, as before with enumerations, if you expect a certain range then fail-fast on the out-of-bounds conditions as well. When transitioning from your public to your private code you can use a more judicious approach, since often the parameters have been fully vetted through your public interface. Still, obey the flow control principles.
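
A minimal sketch of that kind of contract check (Widget, Buffer and SetBuffer are hypothetical names, not code from our tree):

[[noreturn]] void FailFast();   // as sketched under Principle #1

struct Buffer { /* ... */ };

class Widget
{
public:
    void SetBuffer(Buffer* buffer)
    {
        if (buffer == nullptr)
        {
            FailFast();   // the caller broke the contract; don't paper over it with a bail-out
        }
        m_buffer = buffer;
    }

private:
    Buffer* m_buffer = nullptr;
};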

For variable manipulation within your component, rely on checks for your invariants. For instance, if your component cannot store a value larger than a short, then ensure that down-casts aren't truncating and fail if they do. This classically becomes a problem between 32-bit and 64-bit code, when all of a sudden arbitrary code can manipulate values larger than the component was originally designed for.
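
A checked narrowing helper is one way to enforce that invariant (again just a sketch; CheckedNarrowToShort is an invented name):

#include <cstdint>
#include <limits>

[[noreturn]] void FailFast();   // as sketched under Principle #1

std::int16_t CheckedNarrowToShort(std::int64_t value)
{
    if (value < std::numeric_limits<std::int16_t>::min() ||
        value > std::numeric_limits<std::int16_t>::max())
    {
        FailFast();   // the "fits in a short" invariant no longer holds
    }
    return static_cast<std::int16_t>(value);
}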

While a sprinkling of fail-fast around your code will eventually catch even missed invariant checks, the more consistently you use them, the closer your telemetry will be able to point you to the sources of failure.

Exceptions:

None. Again, if you find a condition hits too often, then you'll be forced to understand and supply a fix for it. Most likely a localized fix that has little or no impact on propagating errors to other surrounding code. For instance, truncation or clamping can be a designed (and perfectly acceptable) part of the component depending on its use case.

Principle #4: If you are unsure whether or not to use fail-fast, use fail-fast

This is the back-stop principle. If you find yourself unable to determine how a component will behave or what it might return (this can happen with black-box APIs or even well-documented but closed APIs), then resort to fail-fast until you get positive confirmation of the possibilities.

As an example, some COM APIs will return a plethora of COM error codes, and you should not arbitrarily try to recover from the various failures or guess which codes can and can't be returned. By using fail-fast and your telemetry pipeline you'll be able to find and resolve the sets of conditions that are important to your application, and you'll have confidence that your solutions fix real-world problems seen by your users.
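
In practice that tends to look like a whitelist of the codes you have positive confirmation of (a hedged sketch; the specific codes and the CheckKnownResult name are examples, not a recommendation):

#include <windows.h>

[[noreturn]] void FailFast();   // as sketched under Principle #1

void CheckKnownResult(HRESULT hr)
{
    switch (hr)
    {
    case S_OK:
    case S_FALSE:      // known and handled by the caller
        return;
    case E_PENDING:    // known; the caller retries later (example only)
        return;
    default:
        FailFast();    // unknown code: crash, gather telemetry, then decide what to support
    }
}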

Oddly, this is even more critical when working on pre-release operating systems, services or APIs. Often the introduction of a new error code or the increase in a specific set of error codes is indicative of an OS level bug. By tightening the expectations of your application on a specific API surface area you become one of the pinning tests for that API. While APIs do change, having unexpected or new behavior propagate through your application in unexpected and unintended ways is a bug waiting to happen. Better to crash and fix than to proceed incorrectly.

Exceptions:

Yes, of the fail-fast variety please ;-)

Sunday, November 22, 2015

From State of the Art to the State of Decay

I'm constantly in meetings where we are discussing 10+ year-old code that "smart people wrote," so it must be fairly good. It seems people take offense when you call 10-year-old code bad names, especially if they were somehow connected to it.

Let me start with myself. I've been working on the Internet Explorer code base for over 10 years, having started just before IE 7 Beta 1 shipped. Instead of referring to other people's terrible code, I'll refer to my own terrible code. I'll also talk about our thinking on "State of the Art" at the time and how that leads to now, where the same code that was once the bee's knees is just a skeleton of its former self, having suffered through the "State of Decay".

State of the Art in 2005

We had a legacy code base that had been dusted off after many years of neglect (IE 6 had been shuttered and IE 7 was the rebirth). The ultimate decay: many years of advancements in computer science, none of which were applied to one of the largest and most complex code bases you could conceive of. We had some new tools at the time, static analysis tools, which were very good at finding null pointer dereferences alongside actual security bugs. We had tens of thousands (maybe even hundreds of thousands) of findings across all of Windows that needed to be investigated, understood and fixed.

Our idea of state of the art at the time was that the code shouldn't crash. Ever. Reliability in our minds was error recovery: checking for null all over the place and protecting against the most malicious of external code, since after being shuttered we had no idea who the consumers of our code were or how many there were. Fixing any single line of code presented us with a challenge. Did we break someone? In this state every line of code could take hours to days to understand the ramifications of adding even a single null check and bail-out. And spend hours and days we did.

To the extent that I personally introduced hundreds, perhaps thousands, of null pointer checks into the code. After all, this was the state of the art at the time. We wanted stability and reliability, and most of all we wanted to shut down those rare cases where the tool was pointing out an actual security hole. Of course I thought I was doing the right thing. My code reviewers did too. We all slept well at night getting paid to put in null pointer checks to work around the fact that our invariants were being broken constantly. By doing all of this work we were forever eroding our ability to use those crashes we were fixing to find actual product bugs. "State of the Art" indeed. So how should those conversations about 10+ year-old code be going? Should I really be attached to the decisions I made 10 years ago, or should I evolve to fix the state of decay and improve the state of the art moving forward?

State of Decay in 2015

Those decisions 10 years ago have led to a current state of decay in the code. Of those thousands of null pointer checks, how many remain? When new developers look at the code, what should they glean from the many if-checks for conditions that may or may not be possible? The cognitive load to put one in cost me hours and sometimes days. What about the cognitive load to take one out?

It is now obvious that the code is no longer state of the art, but bringing the code up to a quality similar to new code is an extreme amount of work. It is technical debt. To justify the technical debt we talk about how long that code has been executing and "not causing problems", and we bring up those "smart people" from the past. Yes, we were smart at the time and we made a locally optimal decision to address a business need. It doesn't mean that maintaining that decision is the best course of action for resolving future business needs (in fact it is rarely the case that leaving anything in a state of disrepair is a good decision, as the cost of deferring the repair of decay is non-linear).

I gave a simple example of null pointer checks to address reliability. There were many more decisions made that increased our technical debt versus the current industry state of the art. Things like increasing our hard dependencies on our underlying operating system to the extent that we failed to provide good abstractions to give us agility in our future technology decisions (I'm being nice, since we also failed to provide abstractions good enough to even allow for proper unit testing). We executed on a set of design decisions that ensured only new code or heavily rewritten code would even try to establish better abstractions and patterns for the future. This further left behind our oldest code and it meant that our worst decisions in the longest state of decay were the ones that continued accruing debt in the system.

Now, there are real business reasons for some of this. Older code is riskier to update. It's more costly to update. It requires some of your most expert developers to deal with, developers you'd like to put on the new shiny. Not everyone's name starts with Mike and ends with Rowe, and so you can risk losing top talent by deploying them on these dirtiest of jobs. Those are valid reasons. Invalid reasons are that smart people wrote the code 10+ years ago or that when you wrote that code it was based on state of the art thinking. As we've noted, the state of the art changes, very quickly, and the new code has to adapt.

State of the Art in 2015

It's useful, then, to look at how many "bad decisions" we made in the past, since if we could have foreseen the future, we could have made the 2015 decision back then and been done with all of this talk. Our code could be state of the art and our legacy could be eliminated. So what are some of those major changes that we didn't foresee and therefore have to pay a cost now to adapt to?

Well, reliability went from trying to never crash to crashing as soon as something comes up that you don't have an answer for. Don't recover from memory failures. Don't recover from stack overflow exceptions. Don't recover from broken invariants. Crash instead. I recently posted on our adoption of the fail-fast model in place of our error recovery model and the immense value we've already gotten from this approach. You can read about that here: "Improving Reliability by Crashing".

Security has seen an uptick in the number of exploits driven by use-after-free conditions. Now, years ago a model was introduced to fix this problem, and it was called Smart Pointers; at least in the Windows code base, where COM was prevalent, this was the case. Even now in the C++ world you can get some automatic memory management using unique_ptr and shared_ptr. So how did that play out? Well, a majority of the code in the district of decay was still written using raw pointers and raw AddRef/Release semantics. So even when the state of the art became Smart Pointers, it wasn't good enough. It wasn't deployed to improve the decay; the decay and rot remained. After all, having a half-and-half system just means you use the broken 50% of the code to exploit the working 50%.
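
To make the contrast concrete, here is a hedged illustration (RawLifetime and ManagedLifetime are invented names, and ComPtr is just one of several smart-pointer options):

#include <unknwn.h>
#include <wrl/client.h>   // Microsoft::WRL::ComPtr

void RawLifetime(IUnknown* object)
{
    object->AddRef();
    // ... any early return here leaks; any extra Release() sets up a use-after-free
    object->Release();
}

void ManagedLifetime(IUnknown* object)
{
    Microsoft::WRL::ComPtr<IUnknown> holder(object);   // AddRef in the constructor
    // ... exactly one Release() when holder goes out of scope
}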

So basically "smart pointers" written by "smart people" turned out to not be the final conclusion of the state of the art when it came to memory lifetime and use after free. We have much, much better models if we just upgrade all of the code to remove the existing decay and adopt them. You can read about our evolution in thinking as we advanced through "Memory Protector" and on to "MemGC". If you read through both of those articles one commonality you'll find is that they are defense in depth approaches. They do completely eliminate some classes of issues, but they don't provide the same guarantees as an explicitly reference counted system that is perfect in its reference counting. They instead rely on the GC to control only the lifetime of the memory itself but not the lifetime of its objects.

So there is yet another evolution in the state of the art still to come and that is to move the run-time to an actual garbage collector while removing the reference counting semantics that still remain to control object lifetime. Now you can see how rapidly the state of the art evolves.

We've also evolved our state of the art with regard to web interoperability and compatibility. In IE 8 we introduced document modes for the first time, after we created a huge mess for the web in IE 7, where our CSS strict mode changed so much from IE 6 that we literally "broke the web". Document modes allowed us to evolve more quickly while letting legacy sites on legacy modes continue running in our most modern platform. This persisted all the way until IE 11, where we then had 5 different versions of the web platform all living within the same binary and the number of versioning decisions was reaching into the tens of thousands. One aspect of this model was our Browser OM, some aspects of which I've recently documented in "JavaScript Type System Evolution", if you are interested in going deeper.

So the state of the art now is EdgeHTML and an Evergreen web platform. This in turn allows us to delete some of the decay, kind of like extracting a cavity, but even this carries so much risk that the process is slow and tedious. The business value of removing inaccessible features (or hard to access features) becomes dependent on what we achieve as a result. So even the evergreen platform can suffer from decay if we aren't careful and still there are systems that undergo much head scratching as we determine whether or not they are necessary. But at least the future here is bright and this leads me to my final state of the art design change.

The industry has moved to a much more agile place of components, open source code, portable code and platform-independent code. Components that don't have proper abstractions lack agility and are unable to keep pace with the rest of the industry. There are many state of the art improvements that could apply to this space, but I'm going to call out one that I believe in more than others, and that is replacing OS dependencies, abstractions and interfaces with the principles of Capabilities. And I mean Capabilities in the sense that Joe Duffy refers to them in his recent blogging on the Midori project. I plan on doing a deep session on Capabilities and applying them to the Web Platform in a future article, so I won't dive in further here. This is also a space where, unlike the above 3 state of the art advancements, I don't have much progress I can point to. So I hope to provide insights on how this will evolve the platform moving forward rather than how the platform has already adopted the state of the art.

Accepting the Future

When it comes to legacy code we really have to stop digging in our heels. I often find people with the mindset that a given bit of code is correct until it is proven incorrect. They take the current code as gospel and often copy/paste it all over the place, using the existing code as a template for every bit of new code that they write. They often ignore the state of the art, since there aren't any examples of it in the decaying code they are working on. It is often easier to simply accept the current state of that code than it is to try to upgrade it.

I can't blame them. The costs are high and the team has to be willing to accept them. The cost to make the changes. The cost to take the risks. The cost to code review a much larger change due to the transformation. The cost to add or update testing. They have to be supported in this process and not told how smart developers 10 years ago wrote that code, so it must have been for a reason. If those developers were so smart, they might have thought about writing a comment explaining why they did something that looks absolutely crazy or that a future developer can't understand by simply reading it.

You could talk about how they were following patterns and how, at the time, everyone understood those patterns. Well, then I should have documents on those patterns, and I should be able to review those every year to determine if they are still state of the art. But the decay has already kicked in. The documents are gone, and the developers who wrote the code don't themselves remember what they wrote or what those patterns mean anymore. Everyone has moved on, but the code hasn't been given the same option. Your costs for that code are increasing exponentially and there is no better time than now to understand it and evolve it to avoid larger costs in the future.

We should also all agree that we are now better than ourselves of 10 years ago and it's time to let the past go. Assume that we made a mistake back then and see if it needs correcting. If so, correct it. If not, then good job; that bit of code has lived another year.

If you are afraid of the code, then that is even more reason to go after it. I often hear things like, "We don't know what might break." Really? That is an acceptable reason? Lack of understanding is too often accepted in our industry as a reason to put yellow tape around an area and declare it off limits. Nope. Not a valid reason.

The next argument in this series is that the code is "going away". If it's "going away", then who is doing that? Because code doesn't go anywhere by itself. And deleting some code is still an action that a developer has to perform. This is almost always another manifestation of fear towards the code. By noting it will "go away" you defer the decision of what to do with that code. And in software, since teams change all the time, maybe you won't even own the code when the decision has to be made.

Conclusion

When it comes to evolving your code to the state of the art, don't be a blocker. Accept that there are better ways to do whatever you did in the past, and be truthful when computing the value proposition if you decide not to upgrade a piece of code. Document that decision well and include the reasoning around what additional conditions would have swung your decision (you owe it to your future self). Schedule a reevaluation of the decision for some time in the future and see whether the decision still holds. The worst case is that you lose the analysis and the cost of analysis becomes exponential as well. If that happens you will surely cement that decay into place for a long time.

Try to avoid having your ego, or the egos of others, make decisions about the quality of code from 10, 5 or even 2 years ago. Tech is one of the fastest-evolving industries and it is a self-reinforcing process. As individuals it is hard to keep up with this evolution, and so we let our own biases and our own history slow advancement when the best thing we can do is get out of the way of progress. Our experience is still extremely valuable, but we have to apply it to the right problems. Ensuring that old code stays old is not one of them.

Cleaning up the decay can be part of the culture of your team, it can be part of the process, it can be rewarded or it can just be a dirty job that nobody wants to do. How much decay you have in your system will be entirely dependent on how well you articulate the importance of cleaning it up. The distance between state of the art and state of decay can be measured in months so it can't be ignored. You probably have more decay than you think and if you don't have a process in place to identify it and allocate resources to fix it, it is probably costing you far more than you realize.

Saturday, November 14, 2015

Improving Web Debugging with insights from the Native Debugger

Modern web browsers provide some very capable debugging tools for JavaScript. The ease of use of many of the features matches the ease of use of the language. Most features are accessible and controllable with intuitive UI. However, for the power user coming from native languages, there are some very useful features missing. Given the features of the JavaScript language, there is the possibility of enabling even better hybrid use cases, some of which I would like to explore in this post.

My focus will be on breakpoints. Breakpoints in JavaScript debuggers come from either the debugger statement or from selecting lines of code in your script from the UI. Since JavaScript is complicated, in that it can perform many operations in a single line of code, it has taken a while for debuggers to be able to break at the proper time in the evaluation of a statement. Usually the debugger breaks at the beginning and you can kind of step through to get to the point you want. But if you are evaluating many hits of the same breakpoint this can be tedious. An advanced user might use conditional breakpoints, but depending on when the break happens you might find you are too early or too late to evaluate your condition appropriately. All of this aside, we find our way through the muck and manage to make it work.

In native, breakpoints are about code addresses coming from the symbolic debug information emitted with the program. Your debugger will either allow you to set breakpoints based on a direct address, or you can try to evaluate a symbol instead and obtain an address. Many debugging sessions in C++ start with a script or two to set up a bunch of breakpoints for a set of key functions. This is woefully missing from the current set of JavaScript based debuggers.

Another type of native breakpoint that is very handy is the memory breakpoint. You can set a breakpoint for when a memory location is read from, written to, or both. These are also based on addresses, and so these breakpoints are per instance. When you have thousands of instances this can be extremely handy.
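
In WinDbg that is something along the lines of the following, assuming foobarbaz!g_counter is the 4-byte location you care about (a made-up module and symbol, as before):
ba w4 foobarbaz!g_counter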

So that is our list. We want symbolic breakpoints and memory breakpoints. We probably also want an easier way to run "debugger" scripts than typing into the console. Now, how native debugging works and how JavaScript works are quite different, so we can also improve on some of these facilities as we describe them. Each section following will be broken down into a basic description, a set of challenges and a set of solutions/improvements. Not every challenge will have a solution provided, unless I'm exceptionally on point today.

Symbolic Breakpoints

We want the ability to, given a name of a function and potentially the name of a script or HTML page (module), set a breakpoint on that function when we enter it. This is the same as in the native debugger where I might use a command like:
bp foobarbaz!MyClass::MyFunction
The native debugger also allows for things like deferred breakpoints (bu), which match modules not yet loaded. We can set multiple breakpoints using a wildcard match and the symbolic breakpoint command (bm). Additionally, with some clever scripting we can choose to debug either the incoming call or the return value by using debugger commands to step out of the function and consult, generally, the @eax/@rax registers.
bp foobarbaz!MyClass::MyFunction "gu; r @rax;"

Challenges

That is a pretty cool feature list. So what are some of the challenges? Well, JavaScript doesn't have symbols. It has function objects and function objects have code. Many function objects don't even have names. So how do you set breakpoints on those things? Interesting problem, we'll get to that in solutions.

Setting aside the fact that many things aren't named, many things are. Every function has a "name" property, and so you could imagine the debugger could set a breakpoint on anything with a name. It could also act like the native debugger when more than one match exists and give you some sort of handle that you can set the breakpoint on in the case of, say, 15 functions with exactly the same name.

Modules are rough as well. For instance, you might have multiple scripts in a single HTML file, or you might have dynamically generated names on the server, etc. So how to match modules will be an interesting issue.

Speaking of modules, what about script contexts? Isn't that like multiple processes? Have you ever done multi-process debugging in something like WinDBG or VS? Not fun I can say. You set breakpoints in only the current process and when you break in and out of the debugger you have to constantly figure out your context to know what is going on and whether or not you are setting breakpoints in the right space. With things like iframes kind of acting like processes (or maybe physically being in another process in the case of cross-domain iframe isolation) this problem is basically something that has to be solved.

Solutions

Thankfully most of these have some immediate solutions. I'm going to tackle the symbolic issues first. For this to work you have to treat functions like objects. Functions are themselves breakpoint handles, so I should be able to set a breakpoint based on a function object itself, and this will be a function instance breakpoint.

I shouldn't stop there though. I also want symbolic names to work and even have deferred breakpoints for functions that don't exist yet. So for that, anytime a function is evaluated I will compare its name against my list of symbol expressions. A function might even change its name (not currently, but I don't see any reason why it shouldn't be able to), so while I only have to evaluate these breakpoints against a given function once, if the name changes, I'll have to invalidate the cache and reevaluate on next execution.

This in turn gives us the equivalent of bp, bu and even bm. You could imagine that bm would take a JavaScript RegExp object or string, and so I could write some quite complicated function-matching breakpoints. This would be pretty cool. To extend this we allow each command to run on either the execution or the return of the function. And of course, we supply appropriate UI so that on the return you can review the return value in a special variable window in a super easy way. Also, if you want to write conditional breakpoints, we'd surface the return value in a special syntax so you can query it. Maybe you want to break only when function foo returns true; the breakpoint/conditional for that might be...
debugger.breakPointReturn(null, /^foo$/, "@retVal === true");
For modules I really don't have a good answer. What I think it means is that you want the debugger to tell you about these modules, but then you want to have either positive or negative module matching as part of the breakpoint condition. For this reason I would actually make my breakpoint API take both the module and the symbol as separate values. This is different from native, which takes the full string and uses the ! character to split on. Since JavaScript has different identifier characters, building something like this would be somewhat foolish. Also, with WebAssembly in the future, we probably can't make guesses as to what is and is not valid for identifiers.

Cool, so the last solution we need is how to handle this problem of multiple global scopes. This is where the debugger can shine. I should be able to set breakpoints constrained to a specific global scope, constrained to a specific "module" as defined by the html or script name of the document where the source came from, or even set breakpoints generally across the entire environment. I think that in most cases users would be happy with breaking on anything, anywhere given a specific name.

Earlier we allowed you to set a breakpoint on a given function object. That is highly constrained and doesn't have the multiple global scopes problem so long as you tagged the right function.

Finally, we get OM function breakpoints out of this as well. Since the OM is defined as a series of JS functions in all browsers now, you can use them, or their symbolic names, to match things. We could even allow a constructor name + member name syntax to be very precise, and this would alleviate problems with minified scripts, where the names are encoded up to the point that the function is executed or where function objects are used instead of function names. All of that obfuscation is now "debuggable". That would be super cool.

I'm pretty much freaking out this coffee shop with all of my excitement at this point. I really want these features. Just these breakpoints alone would simplify tons and tons of different debugging experiences that I deal with on a daily basis.

Memory Breakpoints

There is already a useful bit of this in the platform today in Object.observe. This is a proposed ES7 API that allows you to be notified when changes to an object are made. I believe the callbacks run at micro-task checkpoints, but don't quote me on this; I could be completely wrong.

Memory breakpoints are a bit more intrusive. They break on read or write depending on how they are configured. They also occur at a time where you have both the old value and the new value waiting there, so you can choose to "stop" the store or "update" the store if you'd like. This can be used to, say, tweak a scenario where you believe a wrong value is breaking your site. Simply set a memory breakpoint, set the corrected value, and let the page continue on to success.

We probably also want to allow instance-based or constructor-based syntax for this. Having all constructed instances of a given type get the breakpoint automatically would be more useful than having to pick out specific instances. This is something lacking in the native debugger unless you do something really clever with multiple breakpoints.

Thankfully the feature list for this one is shorter than the last, but we still have some key challenges to overcome.

Challenges

Memory breakpoints are going to offer mostly performance challenges, but also some challenges to JIT'ed code. While accessing storage locations can be easily dealt with in the interpreter and even some not so well optimized JIT code, once something becomes a simple mov instruction in the assembly with no other overhead, evaluating the memory breakpoint can be challenging. So then the question becomes, do you have to immediately disable a ton of optimizations the moment this is turned on? I think the answer is yes, which is unfortunate. Nothing like super slowing your code when you are trying to repro a bug.

Beyond this, JavaScript has fields and properties, and properties are a bit harder to manage. They do indicate a "slot" on the object that is accessed, but they may also represent a "slot" in your prototype chain instead. Should memory breakpoints only operate on instance fields, or should they extend to property getter/setter pairs? I think since you can set function breakpoints, restricting to fields might be okay, though this does reduce the effectiveness of the feature.

Finally, the Constructor approach can be challenging. Every Global has its own set of Constructors. So if you set a Constructor memory breakpoint it would only apply to instances in the current global. This is similar to using function objects versus symbolic names. You could overcome this by trying to set a symbolic-name memory breakpoint on EVERY instance, but that overhead seems way too large as well. Ultimately the use cases would define the requirements for this feature and whether or not the costs on the runtime are justified by the debugging efficiency gained by web developers.

Solutions

We inlined most of the solutions above, but let's restate them. First, we accept that memory breakpoints would likely disable some of the JIT'ed functional execution. This is generally okay since most engines are set up for this type of fallback already. They can fall back to less optimized code or even back to interpretation if necessary. We'd have some target for execution speed, such as no slower than executing at interpreted speed.

We'd restrict to fields since properties are "functions" and not really fields anyway. A property might also end up setting or retrieving from a field so you could still see that access if you wanted to. This would have an impact on OM breakpoints where properties are the way something like Element.className works, disallowing you from watching that property. You could use OM symbolic breakpoints with conditional logic to overcome this.

I think tagging a constructor so that any object returned as a result of new on that function is automatically breakpoint'ed is just super cool. I don't think there are many challenges there once the general infrastructure is in place. I'm curious if others would find that useful though. For browser built-ins the multiple-Globals problem could probably be overcome as well if that proved to be an issue.

Conclusion

Curious if these enhancements would be useful to web developers or not. This could be a case of me solving my own problems. I often use native debugging to quickly figure out website problems and narrow down the causes of broken websites. In these cases I don't have access to the original source, nor can I easily make changes. But it's also a reflection of how we work in native debugging in general. The tools are so powerful that we can basically rewrite the code in place, even down to the assembly, so putting in logging and other types of solutions and then rebuilding to try again is completely unnecessary.

If you have any debugging scenarios or stories where these features would help, leave a comment or reach me on Twitter @JustRogDigiTec. I'd love to hear that these facilities might be useful beyond my own use cases.