
Fixing ASP.NET Production Issues by adding custom data to App Insights logs

Debugging issues in production is hard. Things vary – seemingly identical requests can be made to the same URL, yet one succeeds and the other fails.

Without information about the context and configuration it's hard to isolate the issue, so here is a quick way to capture more of that context.

The key is having the data to understand the cause of the failures. One of the things we do to help with that is create our own ITelemetryInitializer for Application Insights.

In this we track which environment the request was made in, the cloud instance handling the request, code version, UserId and lots more.

This means, if an issue occurs, we have a wealth of data to understand and debug the issue. It’s embedded in every telemetry event tracked.

Here is an example of a telemetry initializer which adds the Azure Web Apps instance id and, if a user is signed in, their UserId to tracked events. You'll find that once you start using this you'll add more information over time.
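A minimal sketch of what such an initializer could look like; the "WebAppInstanceId" property name and the use of HttpContext.Current and the WEBSITE_INSTANCE_ID environment variable here are illustrative choices rather than the one true way to do it:

using System;
using System.Web;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class AppInsightsTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Azure Web Apps exposes the id of the instance serving the request as an environment variable.
        var instanceId = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID");
        if (!string.IsNullOrEmpty(instanceId))
        {
            telemetry.Context.Properties["WebAppInstanceId"] = instanceId;
        }

        // Only stamp the user id onto the telemetry when someone is actually signed in.
        var user = HttpContext.Current?.User;
        if (user?.Identity != null && user.Identity.IsAuthenticated)
        {
            telemetry.Context.User.AuthenticatedUserId = user.Identity.Name;
        }
    }
}

Because the initializer runs for every telemetry item, these values end up on requests, dependencies, traces and exceptions alike.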

To get it set up, register it like this in your Startup.cs:

TelemetryConfiguration.Active.TelemetryInitializers.Add(new AppInsightsTelemetryInitializer());

Go do it now – seriously, you'll need it later!

Building a SaaS App from your Boxed Software – Pricing, Profit, Architecture and Performance

While working as a consultant I spent time with a number of large companies helping them move from a traditional boxed software model to providing SaaS applications.

One of the first hurdles to overcome is the pricing model for their new SaaS app and the cultural change involved in pricing SaaS apps vs traditional boxed software.

In the old world the sales, product and marketing teams would discuss the value proposition of the product, take into account the cost of creating the code, and weigh up price elasticity and market demand.

Once that was done the dev team would produce the minimum machine spec required to run the solution and burn the code to a disc or zip up a download for the customers.

At that point all was done. Trouble would brew if future versions needed significantly higher-spec hardware; that might put pressure on the dev team to optimize, but the pricing decision wasn't part of this process.

Now let's look at this in a SaaS world, where the pricing discussion has to take into account the running costs of hosting the solution. There are two approaches at this point: ignore this completely and continue down the old path, OR treat this as a lever you can control and use it to make the product more successful.

Let’s take two examples of decisions that now, in the SaaS world, have a huge impact on the cost of running your solution and, ultimately, your profit margin.


Multitenant vs Single Tenant


Maybe you have an existing boxed product and you want to make a fast move to the cloud to offer a no-hassle hosted option to your customers. The risk of moving slowly is that you may lose market share to others who move first or, worse still, to new entrants who aren't burdened by existing code and have written for SaaS from the start.

You could start a complete re-write, creating a multitenant solution with low hosting costs, built for the SaaS world.

The opportunity cost here is the time and resources needed to execute this strategy, plus the risk of failure.

Alternatively, you could create a single-tenant solution using your existing code, perhaps provisioning IaaS VMs and masking the complexity from your customers – offering a SaaS solution without a large-scale rewrite of your existing product.

The trade-off here is the higher hosting cost associated with a single-tenant solution. Many use this as justification for starting a ground-up rewrite of their software; for some this is correct, but for many it is a failure to account for the cost/risk combination involved in that endeavor.

So how does this affect pricing and profit? Well, the multitenant rewrite involves high expenditure over an indeterminate period of time in the hope of future profits. The single-tenancy approach sacrifices current profit margin to get a foothold in the market. Depending on the market and your business, either of these could be the correct option.


ROI and Solution Performance


In the boxed software world, code quality, performance and efficiency were about ensuring a good user experience and happy customers.

That’s not the case anymore. If you’re moving to SaaS, they directly affect your profit margin. Releasing inefficient code or designing inefficient architectures costs you money.

So what’s the best response? Go nuts on performance testing, micro optimize every line of code and jam as much on one box as possible?

Well, yes and no. Just like with multitenancy, you can use this as a lever. Maybe you're losing ground to a competitor and, to close the gap, you need to increase the speed with which you create new features. You know the margins you have to play with and the future delivery roadmap, so you can take a calculated risk: lower the focus on performance in favor of shipping the new features needed to close the gap.

It may then transpire that uptake of the feature is very small, but that it still helps your product compete. At this point you may decide, given the low usage, not to revisit the feature to improve its performance. In this case you've saved time and effort that would have been spent optimizing unnecessarily.

Again, we've minimized the upfront investment of resources in favor of potentially higher running costs, until you have solid data to justify the time and effort required to optimize the solution.


Summary

Successful SaaS isn't about technology, business or operations in isolation. It's about all of these working together. It's about making decisions in their proper context. It's about having meetings where a key decision maker, dev lead, product manager and sales lead all take time to understand the implications of their actions and explain them to each other. It's about trade-offs, margins, code performance, velocity, hosting costs, market research, competitors … you get the gist.

Above all, it’s about being tactical. Be aware of what decisions cost, monitor the outcome and take calculated risks.


Using App Insights Analytics Query Language to Make Better Decisions

So we've been working on an app recently to complement our website. One thing I strongly believe in is data-driven design or, put simply: "Don't guess, gather evidence".

In our current site we’re using App Insights to capture usage, performance and general telemetry from the platform. This gives us a wealth of knowledge which we can query when making business decisions.

Let's jump in. As an example, I'm going to use the new Analytics platform and its query language to understand more about our mobile users, specifically those on iOS.

First up I’m going to run a query to get the numbers for each client OS we see on the site.
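In the Analytics query language, that could look something like this (client_OS is the App Insights column holding the client operating system; the exact query is a sketch):

requests
| summarize requestCount = count() by client_OS
| order by requestCount desc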

[Screenshot from the original post: query results showing request counts by client OS]

Now, as you can see in the results, iOS is split over various versions. Let's narrow the query down to look at just iOS.
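Roughly, that's just a where clause added to the previous query, something like:

requests
| where client_OS startswith "iOS"
| summarize requestCount = count() by client_OS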

[Screenshot from the original post: request counts broken down by iOS version]


Let's remove the grouping and see if we can find out what we'd lose by just targeting iOS 9 upwards.

First up what are the total numbers for all iOS visitors?
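Dropping the grouping gives a single total, along the lines of:

requests
| where client_OS startswith "iOS"
| summarize totalRequests = count()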

[Screenshot from the original post: total request count for all iOS versions]

What about just iOS 9?
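The same query again, restricted to newer versions; the exact version filter below is illustrative and depends on how the client_OS strings are formatted:

requests
| where client_OS startswith "iOS 9" or client_OS startswith "iOS 10"
| summarize totalRequests = count()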

[Screenshot from the original post: total request count for iOS 9 and above]

In our case this shows that 89% of our usage is from iOS 9 and above. We haven't taken time into account here, though, so all the older iOS usage could be from way in the past. Let's take a quick look at that.
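One way to check is to bucket the counts over time and chart them, for instance:

requests
| where client_OS startswith "iOS"
| summarize requestCount = count() by client_OS, bin(timestamp, 7d)
| render timechart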

[Screenshot from the original post: iOS request counts charted over time]

So there's no real trend to be seen here; the 89% figure stands.

At the moment we’re just looking at requests, so this number could be massively off if one user came and hit the site a lot on a single phone. We’ll use the dcount operator to do a distinct count by sessionID. This will only count each user session once.
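A sketch of that query, using the session_Id column App Insights attaches to each request:

requests
| where client_OS startswith "iOS"
| summarize distinctSessions = dcount(session_Id) by client_OS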

This query showed that this wasn’t the case and the numbers we had are valid, so we can rule out a single user doing a lot of browsing and throwing off our stats.

At this point I realised that we're using some SPA functionality, so server requests don't really map to page views, and I switched to looking at page views to compare the numbers.
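Page views live in their own pageViews table, so the switch is just a change of source, for example:

pageViews
| where client_OS startswith "iOS"
| summarize pageViewCount = count() by client_OS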

Sure enough this made a difference, but the ratios between the numbers were similar: one request generated x page views, where x is fairly constant.

Let's next look at the split between device types. For this we'll use the reduce operator, which groups together similar string values to make things simple, collapsing the long, jumbled list of individual device model names into a handful of buckets.

This is great for understanding patterns in data without having to write lots of where/group bys. But in this case it's a bit too extreme; I'd like "other" broken down a bit more.

Playing around with the "threshold" value makes this happen. After some tweaking I found that 0.2 did the trick for me, and I now have a breakdown of iPad vs iPhone.
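Putting that together, the device-type breakdown could look roughly like this (client_Model is the App Insights column for the device model):

requests
| where client_OS startswith "iOS"
| reduce by client_Model with threshold = 0.2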

Roughly this showed a 50/50 split in iPhone vs iPad traffic for us over the time period.

Summary

89% of our iOS users are on iOS 9 or above and we have a 50/50 split of traffic between iPad and iPhone users. Now when making product decisions we can use this data to drive what and how we target the platform.

Obviously this is only a high level overview of the numbers we ran but hopefully it serves to illustrate some of the functions in analytics and how they can be used to inform better development decisions.