Elastic Mint – A refreshing approach to bespoke software development
https://www.elasticmint.com/

From Offshore to Hybrid: Ensuring Success with UK-Based Intermediaries
https://www.elasticmint.com/from-offshore-to-hybrid-ensuring-success-with-uk-based-intermediaries/
Thu, 13 Jun 2024 10:05:00 +0000
One of the most significant changes brought about by the Covid-19 pandemic was the normalisation of remote working. This shift led to evolving attitudes towards offshore software development, with more companies looking to benefit from accessing a skilled workforce at reduced rates.

However, what does this mean for UK-based software developers? Is the UK’s software development industry heading in the same direction as manufacturing, where offshoring led to a decline in domestic skills and capabilities? What role will home-grown software development companies play in this new landscape?

Despite the growth of offshoring, this isn’t a story of unmitigated success. Some companies have encountered challenges with their offshore teams and have brought development back home. This is where UK-based companies can step in and offer real value by acting as intermediaries between businesses and their offshore teams, effectively mitigating risk and ensuring quality.

The Rise of Offshore Development

Looking at the rates for software developers worldwide, it’s no surprise that many businesses seek cost savings. According to Accelerance’s 2024 Global Software Outsourcing Rates and Trends Guide, hourly rates for a senior developer are:

  • Latin America: $65 – $74

  • Eastern Europe: $64 – $78

  • South Asia: $40 – $46

  • Southeast Asia: $29 – $42

In comparison, rates for UK-based developers range from $76 to $92 (£60 – £127). Therefore, it’s understandable why businesses seek to hire highly skilled developers in South and Southeast Asia.

However, focusing solely on price can lead businesses to overlook the complexities of developing high-quality applications. The success of any software project depends, to a large extent, on good communication and management. When working with developers overseas, the added barriers of time zones, culture, and language heighten these challenges, making them even more critical.

Books like Erin Meyer’s The Culture Map: Decoding How People Think, Lead, and Get Things Done Across Cultures and Craig Storti’s Speaking of India: Bridging the Communication Gap When Working with Indians highlight how people in different cultures tend to communicate and the importance of recognising this and adapting accordingly.

For example, in some cultures like the US, people tend to communicate directly and say exactly what they mean. In many Asian cultures, however, the communication style tends to be more indirect, with meaning implied rather than made explicit. Too much directness may even be viewed as rudeness.

Consider a manager in a direct culture asking a developer in an indirect culture whether they will be able to complete a piece of work by Tuesday. The developer responds that this could be difficult. The manager could interpret this as meaning the work will be done, albeit with some difficulty, while the developer actually means it won't be completed on time.

Simple misunderstandings like this highlight the challenges of working with offshore teams and the problems that can arise if they are not addressed.

The Role of UK-Based Intermediaries

UK-based intermediaries can play a crucial role in mitigating the risks of offshoring development by providing senior developers to work alongside offshore teams. By acting as a bridge between businesses and their overseas teams, they can help ensure smooth and efficient operations.

Being UK-based, these developers can visit a company, spend time with key people, and understand how their business operates. At the same time, by working on the same problems alongside overseas teams, they can share knowledge, ensure that requirements are understood, and help maintain high standards of quality.

Returning to the manager's question about completing a piece of work by Tuesday, UK intermediaries can help avoid such misunderstandings. By being involved in the work alongside their colleagues overseas, they will have a good understanding of the current situation and will be able to communicate it back accurately.

Future Outlook

Spending on offshore development is expected to continue growing as companies benefit from a skilled global workforce at significantly reduced rates. At the same time, it is important not to overlook the challenges of working with offshore developers. Businesses need to be careful not simply to choose the cheapest supplier, as that can be a false economy. And once the right supplier has been found, a lot of work is still required to ensure the relationship is successful.

UK-based intermediaries can help reduce the risks of working with offshore teams by providing senior developers who act as a bridge between businesses and their teams. The role of intermediaries will become increasingly important in navigating the complexities of global collaboration.

Elastic Mint has worked with UK-based businesses since its inception in 2018. We have provided developers, typically with 10 – 15 years of experience, who have worked alongside our clients' teams around the world, sharing their skills and knowledge, ensuring quality control, enhancing collaboration and boosting productivity.

Avoiding the Legacy Trap: Strategies to Maintain Cutting-Edge Software
https://www.elasticmint.com/avoiding-the-legacy-trap-strategies-to-maintain-cutting-edge-software/
Mon, 22 Apr 2024 08:40:24 +0000
This is the third article in a series about the challenges legacy bespoke software presents for businesses which depend on it to manage their processes.

The first article, Bridging the Past and Future: AI-Driven Strategies for Legacy Software Overhaul, highlights how AI is starting to provide tools we can use when modernising legacy codebases, e.g., explaining what a piece of code in an unfamiliar technology does.

The second article, From Old to Bold: Managing Change and Skills in Legacy System Overhauls, touches on some of the human issues, like upskilling teams, while also reminding us of the knowledge that existing teams have even if some of their technical skills aren’t quite up-to-date.

A simple definition of legacy software is any software or application that continues to be used past its prime. But what does that mean, and how does it happen?

Note that while this definition does not explicitly refer to the age of the software, age is clearly a factor in deciding whether an application is legacy or not.

Operating systems like Windows and Android have components that haven't been updated for many years. Why? Because they still do what they were built to do, and there hasn't been a need to change them. Having said that, if the technology used to build that code is obsolete and unsupported, or if an application only runs on an old, unsupported operating system, then this is a problem.

Having legacy software is fine, right up to the point when it isn't. Older technologies are sometimes incompatible with newer tools, potentially preventing a business from taking advantage of new opportunities. Security vulnerabilities in legacy applications can introduce risk from malicious actors. Finally, software developers like to use modern tools, and at some point it becomes hard to find people with the skills and desire to work on older codebases.

The cost of maintaining legacy software is often estimated to consume between 60% and 80% of IT budgets. A report by the US Government Accountability Office found that 80% of America’s IT budget was spent on maintaining software systems, many of which are outdated. So, how do we get to this point, and more importantly, what steps can we take to avoid our software becoming legacy?

How Software Becomes Legacy

The ever-increasing rate of innovation means it is almost impossible to keep up-to-date with the latest version of any tool. With minor updates often weekly and major updates sometimes only a few months apart, it's not surprising that applications rarely use the latest versions of their third-party dependencies.

Leaving aside the technology aspect, in order to respond to changing market demands businesses often need to adapt their processes or the data they store. Unless the software is kept up-to-date, it will eventually be unable to support these new processes, rendering it obsolete.

Ultimately, most businesses simply view software as a cost centre. Once they have paid for it, finance directors rarely want to keep paying, and CEOs would much prefer to fund something that supports the new thing they want to do than update something that mostly works OK. This leads to a lack of investment in existing software, both third-party systems and bespoke software. What they forget, though, is that the longer they wait to upgrade, the more difficult and more expensive it becomes.

Having defined the problem, what steps can we take to avoid our software becoming legacy?

Invest in Scalable Architectures

Building software with longevity in mind involves a commitment to modular design, open standards, and an architecture that anticipates change, allowing for updates and enhancements without foundational overhauls.

Where in the past developers often built rigid monoliths that were hard to deploy, we have moved to a world of services that can be deployed multiple times a day. New functionality involves either creating new services or updating existing ones while keeping their interfaces backward compatible.

Make Time to Keep Software Up-To-Date

A key component of future-proofing software is simply keeping existing software up-to-date. Whenever they open a piece of code, development teams should be encouraged to follow the old Boy Scout rule of leaving it better than they found it. This may involve a small amount of refactoring, such as renaming variables or methods to improve readability, but just as importantly it means updating any dependencies to the latest versions and, when appropriate, updating the language version the component targets. These small incremental changes will ensure the software doesn't depend on obsolete technologies.

Some components are not updated very often. To ensure these stay up-to-date, create a maintenance schedule where each component is reviewed and updated at regular intervals, perhaps every three or six months. This also applies to any third-party tools such as databases or queuing systems. If you are using a managed service from a cloud provider like AWS or Azure, these will be kept up-to-date automatically, but if you are managing a tool like SQL Server or Mongo yourself, it is important not to let it fall badly out of date: doing so will prevent developers from using the latest client libraries and stop your team benefiting from improvements and new features. It is also important to keep track of cloud providers retiring services so you can move off them early if necessary.
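One way to make such a schedule stick is to automate the checking. As a minimal sketch, assuming the code is hosted on GitHub, a Dependabot configuration like this will raise pull requests whenever NuGet or npm dependencies fall behind:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "nuget"   # .Net dependencies
    directory: "/"               # location of the solution/project files
    schedule:
      interval: "monthly"
  - package-ecosystem: "npm"     # front-end dependencies
    directory: "/"
    schedule:
      interval: "monthly"
```

Similar results can be achieved with tools like Renovate, or with a scheduled CI job running a command such as `dotnet list package --outdated`.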

Invest in Training and Skills Development

As well as keeping software up-to-date, we need to support developers to stay abreast of industry trends, emerging technologies and best practices.

When times are difficult, training is often the first thing to be cut. However, when it comes to technology, this can be a false economy. While ideas like setting aside Friday afternoons for developers to study may seem expensive, the benefit comes when what developers learn makes its way into what teams are working on.

This also has the benefit of motivating the team. While software developers obviously want to be paid well, at heart they tend to do the job because they enjoy solving problems and playing with technology. Countless developers attend meetups where they share their knowledge and experience, free of charge, with other developers. Supporting developers with training will help make them loyal and motivated.

Conclusion

In conclusion, while the challenges posed by legacy software are significant, they are not insurmountable. By embracing a forward-thinking approach that incorporates scalable architectures, regular updates, and continuous learning, organisations can effectively mitigate the risks associated with outdated technologies.

Investing in these areas not only preserves the operational efficiency and security of business systems but also ensures that IT infrastructures can evolve in tandem with industry advancements. Moreover, such investments contribute to cultivating a motivated and skilled workforce, capable of driving innovation.

As the technological landscape continues to advance at a rapid pace, the commitment to maintaining and updating software systems becomes not just a strategy for avoiding obsolescence, but a fundamental business practice that supports sustainable growth and competitive advantage.

From Old to Bold: Managing Change and Skills in Legacy System Overhauls
https://www.elasticmint.com/from-old-to-bold-managing-change-and-skills-in-legacy-system-overhauls/
Wed, 10 Apr 2024 14:36:28 +0000
When we consider the task of updating legacy software, our minds often jump straight to the technical hurdles: How will we modernise that outdated API? Is it time to restructure the database for efficiency? And crucially, how do we implement these changes without disrupting ongoing business operations? These questions, while vital, only scratch the surface of a deeper, more complex challenge.

In focusing solely on the technical, we risk overlooking a crucial component of any modernisation effort – the dedicated team that has been maintaining these systems. Dismissing their deep-rooted understanding of the current system as outdated or irrelevant is not just a mistake; it’s a missed opportunity. These individuals often hold the keys to why things were designed in a certain way, offering invaluable insights that can guide the modernisation process.

However, this raises a number of challenges:

  • How do we harness this wealth of knowledge while simultaneously addressing the skills gaps that inevitably arise with the introduction of new technologies?

  • How should we navigate the resistance to change that can manifest within teams accustomed to certain workflows?

  • How do we avoid potential barriers forming between the existing team and any new people we bring in to help with the transformation?

Incorporating the team’s expertise into the modernisation journey requires a strategic approach, one that values their contribution and facilitates their growth.

Learning from the Team

The first step in any transformation is understanding the current system from the perspective of both developers and users. Many companies have internal wikis or even a collection of design documents. While these can provide a good start, they are likely not up-to-date.

Assuming that’s the case, it can be instructive to look at how new developers are inducted onto the team. What is involved in setting up their computer? What materials are they given to bring them up to speed? How do the current developers explain the system to them? This can be a good way of flushing out some of the complexities of the current system and highlighting areas the team views as problematic.

Following on from that, it is useful to interview team members, asking them to describe the system and to draw architecture diagrams based on their knowledge and understanding. While doing this, it's important not to make judgements but rather to show curiosity and ask probing questions to understand the how and, just as importantly, the why.

Having gained some understanding from the developers, a natural next step is to interview different users and ask them to demonstrate how they actually use the system. Ask how well the system supports what they are trying to achieve and how they would improve it. All too often, processes are determined by the software rather than by what people are actually trying to do. These processes may have been shaped by technological constraints from when the system was developed which no longer exist, so separating business goals from how the current system works is vital.

Valuable insights can be gained by comparing the users’ understanding of the system with the developers’. Are there areas where they significantly differ? How well do the developers understand what the users are trying to do? Are there areas of tension or frustration between the two groups? Modernising the legacy system is going to require knowledge and involvement from both of them so it’s vital to get them talking to each other if they don’t already. One good way to do this is to set up workshops where developers and users talk through some of the current system’s pain points and discuss how they could be addressed.

Upskilling the Team

At the same time as discussing the current system with the development team, it is important to start thinking about the technologies you wish to use as part of the modernisation. The team may already have some strong ideas around this, but even if they don’t, asking for their thoughts will get them involved and help overcome any resistance to change.

Alongside this, giving the developers time to explore some options and making time for them to learn new skills is likely to enthuse them. One idea would be to ask people to develop some simple POCs to address various challenges. But remember, activities like this should be timeboxed rather than open-ended.

Navigating Resistance to Change

While getting the developers and users involved by asking them about the current system and how to modernise will help, it is likely there will still be reluctance from some.

Reasons for this can include:

  • fear of how the changes might affect people's roles

  • familiarity and comfort with how things work now, even if they are inefficient

  • past experiences of legacy system overhauls going wrong

  • poor communication about the reasons for the change and the expected benefits

There may even be a feeling of implied criticism, especially if someone new comes in, starts asking why things were done in a certain way, and begins talking about making improvements.

To overcome any resistance, it’s vital to involve people and explain why now is the time to modernise. Emphasising the value the current system has brought over time and talking about some of the opportunities that newer technology can bring will help avoid people feeling any implied criticism of what they have done in the past. Above all, clear communication about plans, timescales, and even some of the challenges will help prevent the feeling of having something done to you.

Integrating New and Existing Team Members

The chances are that new people will need to be brought in to help with modernising the current system. This may be a few new developers with skills in the technologies that are going to be used, or there may be an external consultancy that will bring in a team to implement the modernisation and will then go once it’s completed.

The combination of existing team members who may have been using older technologies for many years and new developers who don’t seem to have any respect for or knowledge of those older technologies can easily lead to resentment and mutual animosity.

Leaders should remind new team members to avoid any disparaging talk about what has gone before, while at the same time encouraging the existing team to take on board new ways of doing things.

Rather than leaving the existing team to maintain the legacy system while a new team builds something new, a much better option is to have existing and new developers working together. New developers can benefit from the knowledge and experience of those who have worked on the current system, while at the same time supporting the existing team as they skill up.

Conclusion

Modernising a legacy piece of software involves both technical and human-centred challenges. Developers will often focus on the technical hurdles, but leaders who don’t think about their team do so at their peril.

Developers who have been working on a system for a number of years may not be up to speed with the latest technologies, but they have a wealth of knowledge about how that system works. Tapping into that knowledge while also giving them opportunities to upskill is an integral part of ensuring that any technical transformation is successful.

Bridging the Past and Future: AI-Driven Strategies for Legacy Software Overhaul
https://www.elasticmint.com/bridging-the-past-and-future-ai-driven-strategies-for-legacy-software-overhaul/
Mon, 25 Mar 2024 09:44:57 +0000
The pace of innovation in software development is not just accelerating—it’s sprinting. Technologies that were on the bleeding edge yesterday are today’s relics, swiftly eclipsed by newer, more agile solutions. This rapid evolution turns cutting-edge applications into legacy systems almost overnight.

Alongside this, businesses are in a constant state of flux, grappling with evolving market demands, regulatory landscapes, and global challenges. They demand software that doesn’t just keep up but anticipates and adapts to their ever-changing needs.

With the arrival of Generative AI, is there a route to using it when updating the legacy software that so many businesses have?

The Problem of Legacy Software

Keeping up with technological change is almost impossible. At present there is a new version of .Net every year, and don't get me started on JavaScript libraries, where barely a week goes by without a new one promising to revolutionise the way web applications are built! Businesses that have built software to manage various processes simply don't stand a chance.

The term legacy software describes any software or application that continues to be used past its prime. Reasons for holding on to such software range from it still performing business-critical functions to concerns about the cost of redeveloping it, even though it no longer fully aligns with how the business works or wants to work.

The longer a business waits to update legacy software, the greater the challenge becomes. Often the original technologies used to develop the software have been superseded by new tools, and the original developers are no longer available, which can make it difficult to find developers to maintain it. Because development practices have also changed, there may be a fear that making changes will inadvertently cause problems. All of this leads to the widely quoted statistic that 60% – 80% of IT budgets are regularly spent on maintaining legacy software.

Strategies for AI-Driven Legacy Overhaul

Untangling a system that has been developed over many years and planning a route to modernisation will always require careful thought and analysis, but once you have a plan, AI tools now offer ways to save time.

Tools like ChatGPT, GitHub Copilot and ReSharper's AI Assistant can take code written in an old language and generate an equivalent in a modern language. You can ask them what a piece of gnarly code does and they will explain it to you; if you want, they will then generate some tests for it.
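As an illustration (this is a made-up fragment, not taken from any real system), given a piece of classic VB6 such as:

```vb
' Hypothetical legacy VB6 helper from an old ordering system
Public Function OrderTotal(Qty As Integer, Price As Currency) As Currency
    If Qty < 0 Then Qty = 0
    OrderTotal = Qty * Price
End Function
```

a tool like those above can typically produce an idiomatic equivalent in modern C# along these lines:

```csharp
// An equivalent a tool might generate in modern C#
public static decimal OrderTotal(int qty, decimal price)
{
    if (qty < 0) qty = 0;   // preserve the original clamping behaviour
    return qty * price;
}
```

and, if asked, generate unit tests to pin down the behaviour before any further refactoring.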

Increasingly AI tools are integrated into development environments. Having been trained on many thousands of codebases, they can quickly spot when a developer is trying to write something that has been written before and will make suggestions, potentially saving vast amounts of time.

Most business applications are built around a database. A quick search online reveals a number of AI tools for database design and query optimisation. Besides the big players like Microsoft and Oracle, it will be interesting to see which of the various offerings gain a big following.

Planning for an AI-Driven Modernisation Project

As with any project, planning is vital before using AI to help modernise a piece of legacy software. Ultimately, Generative AI is just another tool to help get the job done.

Before starting, it's essential to engage with stakeholders to understand how the software supports the business now and how they would like things to change. A common pitfall is simply reproducing the existing system in a new technology. Undoubtedly, users will have developed workarounds for areas where they feel the system is lacking. A modernisation project is the perfect time to ask why things are done as they are and whether improvements can be made.

Seeing the current developers as part of the legacy system may be tempting. However, they will have a wealth of knowledge about the business and why certain things were implemented the way they were. Quite probably they will have been itching to overhaul the system, and involving them early on is a great way to help them learn new skills while also benefitting from their knowledge.

Finally, it is crucial to develop a roadmap identifying the general order in which parts of the legacy system will be updated. This may involve temporary solutions to problems which are discarded once the modernisation is complete. Assuming the system isn’t very modular, this is an excellent opportunity to bring in modularisation and embrace modern practices like CI and CD, which aren’t always possible with old monolithic systems. This will have the side benefit of helping to future-proof the system as the business continues to evolve.

Conclusion

Overhauling a legacy system is both a challenge and an opportunity. The challenge is finding the right tools and planning how to make the change with minimum disruption to how the business operates. The opportunity is to help drive the business forward by delivering software which is adaptable and ready to face the future.

Generative AI is rapidly providing tools to help in that process, from writing code and automating tests to designing databases and optimising queries. If you are not already using these tools, it’s time to start experimenting to see what they can do for you.

The Dangers Of Trying To Future-Proof Code
https://www.elasticmint.com/the-dangers-of-trying-to-future-proof-code/
Tue, 29 Jun 2021 09:18:38 +0000
Reuse For The Sake Of It

One of the first lessons we learn as software developers is about code reuse. Every book, every piece of documentation, every blog breaks code down into small reusable chunks, i.e. classes and methods. The reasoning behind these choices is rarely discussed because there is an implicit assumption that everyone understands it. Writing the same code multiple times is considered bad, so we organise it in such a way as to allow us to use it wherever we want. Even though we rarely plan to reuse the code at the time of writing, we have this thought that someone might want to in the future.

On the surface this seems like a really good thing. We want to make it easy for our future selves to make changes. The problem is that as a species humans are not good at predicting the future. We abstract code out into classes in an attempt to future-proof it, but the imagined future rarely happens. As a result we find ourselves working in code with hundreds of service/utility/helper classes containing single-use pieces of functionality and code that is usually nowhere near as easy to understand or change as we first thought.

Rethinking Abstraction

So what can we do about this? I would suggest that we need to rethink the way we approach writing code, based on the following principles:

  • Organise code by features
  • Write tests alongside our code
  • Don’t abstract until we need to
  • Refactor to patterns only when they become apparent

As developers, we have been conditioned to organise code according to its place in the architecture. The project templates in Visual Studio come with folders named Views, Models and Controllers, and developers tend to follow this pattern, adding other folders named Services, Repositories or Utils. Consider what this means for a typical piece of functionality in an API, e.g. an endpoint to add or update a customer record.

A Common Approach To Organising Code

In the .Net world, the common approach would be to add the endpoint to a class called CustomerController. This class may call a method in the CustomerService class which will then call the CustomerRepository to write the data to a database. Along the way it may use a CustomerValidator and a CustomerMapper, or even a CustomerHelper.

Each of these classes will be in different folders and the developer will need to hunt through the codebase to find them. Most of the methods will be used in exactly one place, although of course they will be reusable, because someone might want to use them in the future.

Notice how all these classes are named CustomerSomething. This is to help developers find them amongst all the other classes.

Also while it might be clear what a CustomerValidator or CustomerMapper does, what does a CustomerService do? Or a CustomerHelper?

This way of organising code is partly a result of the desire to future-proof and make code reusable.

Organising By Feature

An alternative would be to have a folder called Customers which contains all the functionality related to customers. This is where the controller could live, or, if using a library like Ardalis ApiEndpoints, there might be a class for each endpoint. A good way to start would be to put all the code to process the request into a single method.
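As a sketch, a feature-organised codebase might look something like this (the file names here are hypothetical):

```
src/
  Customers/
    AddOrUpdateCustomer.cs        // endpoint, validation and persistence together
    AddOrUpdateCustomerTests.cs   // tests live alongside the feature
    GetCustomer.cs
  Orders/
    CreateOrder.cs
```

Everything a developer needs in order to understand the add-or-update behaviour sits in one folder, rather than being spread across Controllers, Services and Repositories folders.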

In the .Net world at least, I’ve observed that very few developers practise Test-Driven Development – although everyone agrees on the importance of tests.

If there aren’t any tests already, this would be the time to write the first one. Or even several! These tests should cover the full functionality of the endpoint.

Having written some tests, it may make sense to abstract some of the code out into other well-named methods for readability.

Having followed this approach, all the code related to adding or updating a customer would be in one place which is easy to find. There wouldn’t be classes vaguely related to customers scattered across the codebase.

As we continue developing some of this code may turn out to be reusable and then it can be abstracted into classes. The tests will enable refactoring to be done safely because they cover the whole endpoint.
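
As a sketch, the same endpoint organised by feature might look like this, with all the request-processing code starting out in a single method; the names, the CustomerDto shape and the ICustomerStore abstraction are again illustrative:

```csharp
// Customers/UpsertCustomerEndpoint.cs -- everything for this feature lives here
public class UpsertCustomerEndpoint : ControllerBase
{
    private readonly ICustomerStore _store;

    public UpsertCustomerEndpoint(ICustomerStore store) => _store = store;

    [HttpPut("customers/{id}")]
    public async Task<IActionResult> Handle(int id, CustomerDto dto)
    {
        // Validate, map and save inline to begin with, covered by tests
        // written against the endpoint as a whole. Extract well-named
        // private methods, and later classes, only as the need emerges.
        if (string.IsNullOrWhiteSpace(dto.Name))
        {
            return BadRequest("Name is required");
        }

        var customer = new Customer { Id = id, Name = dto.Name };
        await _store.SaveAsync(customer);
        return Ok(customer);
    }
}
```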

The Path to Success

Another word for future-proofing is guessing. When we do this we often get it wrong. We write more code than is needed and abstract ideas prematurely. Code becomes harder to reason about and harder to change. Some people might call this over-engineering. Ultimately this leads to bugs and longer cycle times.

As a reminder here is how I would define the path to success:

  • Organise code by features
  • Write tests alongside our code
  • Don’t abstract until we need to
  • Refactor to patterns only when they become apparent

The post The Dangers Of Trying To Future-Proof Code appeared first on Elastic Mint.

]]>
Coding Tip – Making Code Readable with Extension Methods https://www.elasticmint.com/coding-tip-making-code-readable-with-extension-methods/ Tue, 04 May 2021 08:26:29 +0000 https://www.elasticmint.com/?p=434 Extension methods have been available in C# for many years now. As with all language features they are open to abuse, but used well they can enable developers to produce fluent interfaces and stateless methods for reuse. One of my … Continue reading

The post Coding Tip – Making Code Readable with Extension Methods appeared first on Elastic Mint.

]]>
Extension methods have been available in C# for many years now. As with all language features they are open to abuse, but used well they can enable developers to produce fluent interfaces and stateless methods for reuse.

One of my bugbears is code that is hard to understand. Or, more accurately, code I have to think about in order to understand.

Configuring Services in Application Startup

Take the Startup class in ASP .Net Core APIs. In the ConfigureServices() method there is often a lot of code adding services to the IServiceCollection to enable Dependency Injection. This code can be overwhelming when you first look at it. E.g.

services
    .AddScoped<IMyService, MyService>()
    .AddScoped<IMyService2, MyService2>()
    .AddScoped<IMyService3, MyService3>() 
    ...

I like to organise code according to the functionality it supports. Many of these services support a specific area of functionality. A well-named extension method, placed in the folder for a particular piece of functionality, makes it easy to group together the registration of the services that functionality relies on. The intention of the code in the ConfigureServices() method then becomes much easier to understand. E.g.

services
    .AddFeature1()
    .AddFeature2()
    .AddFeature3()
    ...
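
One such extension method might look like the sketch below; the feature and service names are illustrative:

```csharp
// Feature1/ServiceCollectionExtensions.cs
public static class ServiceCollectionExtensions
{
    // Groups the registrations Feature1 relies on in one place,
    // next to the code that uses them.
    public static IServiceCollection AddFeature1(this IServiceCollection services) =>
        services
            .AddScoped<IFeature1Service, Feature1Service>()
            .AddScoped<IFeature1Repository, Feature1Repository>();
}
```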


Dynamically Building Database Queries

Another area where I find extension methods useful is dynamically building up queries using the Elasticsearch Nest client or the MongoDB Driver. Very often, I have to build a query based on an optional parameter in the request. This can lead to lots of if statements checking for the parameter; if the parameter is present, the query is then extended. E.g.

var filter = Builders<MyCollection>.Filter.Empty;

if (property1 != null)
{
    filter &= Builders<MyCollection>.Filter.Eq(x => x.Property1, property1);
}

...

Very quickly this code can become so long and unreadable that it’s hard to understand its intention. Moving it into extension methods leads to code like the following, where the intention is much clearer.

var filter = Builders<MyCollection>.Filter
    .Empty
    .ByProperty1(property1)
    .ByProperty2(property2) 
    ...;
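
One way such an extension method could be written with the MongoDB driver is sketched below; ByProperty1 and the property name are illustrative:

```csharp
public static class MyCollectionFilterExtensions
{
    // Extends the filter only when the optional parameter is present,
    // keeping the null checks out of the query-building code.
    public static FilterDefinition<MyCollection> ByProperty1(
        this FilterDefinition<MyCollection> filter,
        string property1) =>
        property1 == null
            ? filter
            : filter & Builders<MyCollection>.Filter.Eq(x => x.Property1, property1);
}
```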

The obvious benefit to all this is that it helps the next person who works on the code to quickly understand what it is trying to do. It’s easier to make changes and there are likely to be fewer bugs.

The post Coding Tip – Making Code Readable with Extension Methods appeared first on Elastic Mint.

]]>
Choosing the right tool – Elasticsearch https://www.elasticmint.com/choosing-the-right-tool-elasticsearch/ Tue, 13 Apr 2021 08:00:46 +0000 https://www.elasticmint.com/?p=421 Choice in the database market has never been so great, especially when it comes to document databases. Should you go with MongoDB or Amazon’s DocumentDB? Is Couchbase the right fit for you? One solution that has become popular is Elasticsearch. … Continue reading

The post Choosing the right tool – Elasticsearch appeared first on Elastic Mint.

]]>
Choice in the database market has never been so great, especially when it comes to document databases. Should you go with MongoDB or Amazon’s DocumentDB? Is Couchbase the right fit for you?

One solution that has become popular is Elasticsearch. At first glance it’s another NoSQL database storing data as JSON documents. If you load their Shakespeare sample dataset into Elasticsearch you can write queries like the following:

GET shakespeare/_search
{
    "query": {
        "term": {
            "play_name": {
                "value": "Twelfth Night"
            }
        }
    }
}

This will return all the documents relating to Twelfth Night. But this is something any document database can do, so why would you choose Elasticsearch?

The answer comes when you start to understand the technology behind it. Elasticsearch is a distributed full-text search and analytics engine backed by Lucene. Or in other words it allows you to search through your data in a very similar way to any search engine, returning results ranked by how closely they match the search term. Rather than asking for all the characters within a play like Twelfth Night, the question it can answer is which play has a phrase a bit like “when shall we meet again”.

The query for that question looks like this:

GET shakespeare/_search
{
    "query": {
        "multi_match": {
            "query": "when shall we meet again",
            "fields": [
                "text_entry"
            ]
        }
    }
}

Unsurprisingly, the highest ranked result is Macbeth with the line “When shall we three meet again” which has a score of 23.668333. However, Romeo and Juliet has the line “Farewell! God knows when we shall meet again” (score 21.306107), and Troilus and Cressida has the line “When shall we see again?” (score 18.1719915).

In a business setting a useful piece of functionality might be a free-text search which allows users to search through various pieces of data. For example, a salesperson remembers talking to a client about an issue and adding some notes to their profile, but can’t remember which client it was; or someone needs to search through an archive for documents containing certain keywords or phrases. Elasticsearch is a technology that could help with this.

At Elastic Mint we pride ourselves on helping customers choose the right technology for their problem. How can we help you?

The post Choosing the right tool – Elasticsearch appeared first on Elastic Mint.

]]>
Why we sweat the small stuff https://www.elasticmint.com/why-we-sweat-the-small-stuff/ Mon, 21 Sep 2020 19:39:32 +0000 https://www.elasticmint.com/?p=232 Have you heard the phrase “Don’t sweat the small stuff”? It comes from a book by Richard Carlson called Don’t Sweat the Small Stuff and It’s All Small Stuff all about stopping little things in life driving you crazy. In many areas … Continue reading

The post Why we sweat the small stuff appeared first on Elastic Mint.

]]>
Have you heard the phrase “Don’t sweat the small stuff”? It comes from a book by Richard Carlson called Don’t Sweat the Small Stuff and It’s All Small Stuff all about stopping little things in life driving you crazy.

In many areas of life this makes a lot of sense; however, when it comes to writing code we’ve found that the “small stuff” really matters.

The problem is that after you have solved someone’s problems and written the code that does what they want, at some point there will be a new problem. And then someone, quite possibly not you, is going to have to look at what you have done and work out what needs to change in order to solve this new problem.

Writing code is hard. Organising your thoughts so that it makes sense to other people is even harder. The next person to read this code is going to have a hard enough time understanding it, so the least we can do is sweat the small stuff, care about the details, and do whatever we can to make their life easier.

Of course, lots of companies have coding standards and that’s all well and good, but we believe in taking things beyond that and have a philosophy of making code as maintainable as possible for the next person. Among other things this involves thinking about the following:

  • Does the name I have given this variable/method/class accurately describe its purpose?
  • Is it easy to look at a method and understand what it does?
  • Is the important information regarding a method/class easy to find?
  • Are there any abbreviations or spelling mistakes which might make it harder for someone to understand?
  • Does a class do one thing well or has it become a dumping ground for lots of unrelated functionality?
  • Are names consistent throughout the code?

Let’s work through an example of how we might apply some of these questions to the issue of positioning parameters for a method.

How often have you seen a method that looks something like this?

private static ReturnType DoSomething(Parameter1 parameter1, Parameter2 parameter2, Parameter3 parameter3, Parameter4 parameter4) 

For the purposes of this example let’s not worry about what the method does, the parameters or the return value. Let’s just assume there is a good reason for all the parameters. The problem is that this is a really long line. The chances are that in order to read all the parameters you will need to scroll across in your editor. And you can guarantee that the parameter you are interested in will be the last one!

So what can we do? How about splitting some of the parameters onto separate lines?

private static ReturnType DoSomething(Parameter1 parameter1,  
    Parameter2 parameter2, Parameter3 parameter3, Parameter4 parameter4) 

This is much better. At least everything is visible without scrolling, but those three parameters all on the same line are still hard to read. The middle one especially can easily be lost between the other two.

private static ReturnType DoSomething(Parameter1 parameter1,  
    Parameter2 parameter2,  
    Parameter3 parameter3,  
    Parameter4 parameter4) 

Now the parameters have been split onto their own lines. However, the first parameter does not line up with the other parameters. This might mean that someone completely misses it and doesn’t realise it’s there. Potentially this is confusing.

private static ReturnType DoSomething( 
    Parameter1 parameter1,  
    Parameter2 parameter2,  
    Parameter3 parameter3,  
    Parameter4 parameter4) 

Finally, each parameter is aligned and on its own line. When someone reads this method, they can easily see all the parameters. If necessary, they can easily add or remove a parameter. The important information is easy to find and someone should be able to quickly understand the parameters required for this method. Now all someone needs to do is understand the code inside the method!

Now some people might look at that example and argue that it’s a lot of fuss about not very much, and taking that code in isolation they might be right. But in a codebase with many hundreds or even thousands of lines of code, applying this kind of thinking at least shows the next person that care and thought has gone into it. It can make the difference between venting frustration about “whatever idiot wrote the code” and focusing on what the code actually does.

In short, sweating the small stuff makes the next person’s job easier, and that person might just be you!

The post Why we sweat the small stuff appeared first on Elastic Mint.

]]>
Do we still need ORMs? https://www.elasticmint.com/do-we-still-need-orms/ Fri, 02 Nov 2018 19:44:38 +0000 https://www.elasticmint.com/?p=240 Do you remember the first time you used an ORM (Object Relational Mapper)? For me it was something I wrote to make it easier to map data related to testing electronic devices into a SQL database. Different devices generated different … Continue reading

The post Do we still need ORMs? appeared first on Elastic Mint.

]]>
Do you remember the first time you used an ORM (Object Relational Mapper)? For me it was something I wrote to make it easier to map data related to testing electronic devices into a SQL database. Different devices generated different test data, and so I used a convention to map the property names on the classes to the SQL tables and columns. At the time I didn’t even know what an ORM was. I was just trying to avoid writing repetitive code.

Sometime later I met NHibernate. Once I got past the horrible XML configuration for the mapping, I really liked that I could work with objects in code and not really think about the database. Certainly, I didn’t miss writing ADO .Net code like that below.

using (var conn = new SqlConnection(connectionString))
using (var command = new SqlCommand("UPDATE mytable SET ...", conn) { CommandType = CommandType.Text })
{
    conn.Open();
    command.ExecuteNonQuery();
}

Almost as soon as I started using NHibernate I started having problems. The issues were related to the lifetimes of the ISession and ISessionFactory: somehow my colleagues and I failed to grasp that sessions were not meant to be long-lived. The other problem was lazy loading. So, although the code to access the database became quite easy to write, it didn’t perform very well.

However, I still liked the idea that I could create domain objects with functionality on them and that storing them in a SQL database was relatively simple. The database became just a service, and we managed to keep business logic inside the application rather than it sometimes finding its way into stored procedures.

Over time other ORMs became popular. Entity Framework took over from NHibernate, and if you wanted something closer to my original use case there was always Dapper.

Why Would you Use an ORM?

For me the benefits of an ORM have always been related to making it easier to access the database:

  • No need to write repetitive boiler-plate data access code
  • Business logic can be placed on domain objects which can be easily stored in and retrieved from a database
  • Object tree can be retrieved all at once, or with lazy loading
  • Can use LINQ to access data
  • Database tables/columns can be automatically generated
  • Easy to use for non-database experts
  • Protection from SQL injection

These benefits though come with a cost.

The first cost is obviously performance. Any library that generates SQL is doing extra work, and that work comes at a price. The SQL generated is almost certainly nothing like the SQL you would write yourself. This doesn’t mean it’s bad, but it highlights that using an ORM means delegating responsibility for the SQL used to interact with the database to a tool. You need to trust that it generates efficient, performant SQL, and accept that you can’t change it if you’re not happy.

Secondly, in order to gain the most benefit from an ORM, you need to understand how things like lazy loading work. One of the most common problems is the N + 1 issue. This arises when a parent object containing a collection of child objects, e.g. an order with order lines, has been loaded into memory. If you iterate across the collection, each child is loaded in turn from the database, incurring the cost of opening and closing a database connection and running a SQL query for each record. The larger the collection, the bigger the effect.

To solve this you can eagerly load the collection, but this can also be inefficient. Internally the ORM will generate a SQL JOIN and the rows returned from the database will each contain all the data from the parent record. So not only is more data transferred, but the ORM also has to sort out the duplicate data and split it into the appropriate objects. Eagerly load several child collections and you can see how this quickly becomes a complex, time-consuming operation.
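
As an illustration, here is a sketch of both behaviours using Entity Framework Core; the Orders model and the Process method are assumptions for the example, and the lazy version presumes lazy-loading proxies are enabled:

```csharp
// N + 1: one query for the orders, then one more query per order
// as each Lines collection is lazily loaded.
var lazyOrders = context.Orders.ToList();
foreach (var order in lazyOrders)
{
    Process(order.Lines); // triggers a separate SQL query each time
}

// Eager loading: a single SQL JOIN, at the cost of duplicated parent
// data in every returned row, which the ORM must then de-duplicate.
var eagerOrders = context.Orders.Include(o => o.Lines).ToList();
```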

Of course, there are strategies you can employ to get the best out of an ORM, but the question has to be asked, at what point do the trade-offs stop being worthwhile?

When to Use an ORM

A lot has changed since I first used NHibernate. Increasingly, I come across the opinion that it’s just not worth using an ORM. Perhaps a lot of that is related to the changes we’ve seen. Rather than monoliths, we tend to see enterprise applications split into smaller services. While we still often have a relational database as the source of truth, we very often copy data into other data stores, e.g. MongoDB, Elasticsearch, DynamoDB, depending on how we wish to use that data.

I can still see a use for ORMs, but I would suggest it is not dissimilar to VB6, which many developers had a love-hate relationship with. VB6 was designed to enable rapid development of relatively simple applications, and it did a great job at that. The downside was that if you took it too far beyond that use case you would hit issues, e.g. the maximum number of controls on a form.

For small, simple applications where performance is not crucial, I think an ORM can be fantastic. A modern ORM like Entity Framework Core allows you to rapidly develop your database functionality enabling you to focus more on your business logic. For many small line-of-business applications I have worked on, an ORM has met all their needs.

When developing enterprise-level applications, such as those behind successful online retailers like ASOS or Just Eat, performance becomes a lot more important. These applications have to handle many thousands of requests every second and almost certainly will be served much better by hand-crafted SQL, probably in stored procedures. The way in which data is stored and retrieved will tend to be optimised to avoid performance bottlenecks and may involve the use of other technologies, thus rendering the question of ORMs moot.

The post Do we still need ORMs? appeared first on Elastic Mint.

]]>
Bespoke software https://www.elasticmint.com/bespoke-software/ Mon, 24 Sep 2018 19:37:50 +0000 https://www.elasticmint.com/?p=227 What is bespoke software? Bespoke is just a fancy word for custom, right? Well, yes. And no. According to putthison.com the word ‘originated in shoemaking, but gained in popularity through custom tailoring in England, where lengths of cloths were said to be … Continue reading

The post Bespoke software appeared first on Elastic Mint.

]]>
What is bespoke software?

Bespoke is just a fancy word for custom, right? Well, yes. And no.

According to putthison.com the word ‘originated in shoemaking, but gained in popularity through custom tailoring in England, where lengths of cloths were said to be “spoken for” or “bespoken” by another customer.’ [1] ( https://putthison.com/the-overuse-of-the-word-bespoke-many-words-are/) There are several levels of custom-made clothes:

  • Made-to-order – only the materials are customised
  • Made-to-measure – the materials and the cut are tailored based on a single fitting
  • Bespoke – garments are made through a series of fittings

Translating this into software development, we can think of ‘made-to-measure’ as being like a fixed-scope, Waterfall development process, where we get the requirements up-front, build the software and then deliver it. This process does not cater for any changes to the original specification.

I see bespoke software development as an iterative process, requiring an intimate understanding of the problem and a strong relationship between the developer and the customer. If we start by delivering a minimum viable product, customers get to use the software as soon as it is ready – to try it out for size, so to speak. Through a series of iterations and feedback, we can amend, add and remove features until we have software that works as we want it.

As with tailoring, the bespoke software may take a little longer to produce and require a little more active involvement of the customer during production, but the result is something that is precisely the right fit.

The post Bespoke software appeared first on Elastic Mint.

]]>