Rise of the robots? Don’t forget the manual override

Bovill

‘Robo-advisers’ are filling the perceived advice gap in the UK. The FCA’s recent review, ‘Automated investment services – our expectations’, confirmed the anticipated growth in online discretionary investment management (ODIM) and auto advice. But what about the risks inherent in an automated approach, and how should Compliance influence the design of the service to mitigate them?

Suitability assessments

Whether you are delivering a personal recommendation or making a decision to trade, face-to-face or through automated channels, you must still undertake an adequate suitability assessment.

In its recent review, the FCA found that many automated investment services firms did not adequately evaluate clients’ knowledge and experience, investment objectives and capacity for loss in their suitability assessments.

Firms often rely on a client to complete a ‘self-assessment’, or simply to ‘tick a box’ to confirm that, for example, they have investment experience. Some firms have devised clever ‘educational tools’, such as videos which explain risks to aid client understanding. However, if you can simply let the video run whilst making a cup of tea, then how can you confidently confirm the client understands?

It will always be more difficult to complete these crucial assessments when you’re not face-to-face with the client. That’s why it’s important to work with those developing the system to ensure the client has as much information as possible before making a decision.

Product governance

Now that’s out there, let’s start from the beginning. It is imperative that Compliance are involved from the outset in the design of any robo-advice service offering. The product governance obligations are clearly set out in the Product Intervention and Product Governance Sourcebook (PROD).

Good product governance should ensure that the new service meets the needs of the specific target audience. Thorough market research is a good way of identifying a suitable target audience. What type of service is required? Does your target market consist of those looking for a full discretionary service, or those who simply want to remain informed and aware of the investments being recommended?

Clear, fair and not misleading

The initial information provided to potential clients to explain the nature of the service on offer is key to generating the right client outcomes. In its recent review, the FCA found that most of the firms reviewed were not clear enough about the service being offered (for example, discretionary or advisory).

Clients often don’t understand the difference between ‘simplified advice’, ‘focused advice’ or even ‘one-off advice’. Failing to explain clearly upfront what type of service is being offered could affect whether the end service is suitable for their needs.

The language used when communicating with clients needs to meet the usual ‘clear, fair and not misleading’ principle. How this is presented to users will affect outcomes for the client. Information about the firm and its services, the associated charges and, of course, the risks associated with the service (does ‘Capital at risk’ ring a bell?) should all be transparent – and jargon free – when disclosed to clients.

Kick-outs

Once that’s all decided, you need to front-load your system with hard limits – clear questions with black and white answers to kick out clients whose needs cannot be catered for using this automated service. Questions about investment amount, time horizon, level of personal debt, size of emergency fund and so on can all be used to weed out people who have more complex needs and so are unsuited to this type of service. These eligibility criteria are the most important to get right – thereby avoiding the more obvious mis-sales to folks who really shouldn’t be investing in the first place.
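To make the idea concrete, the hard-limit checks described above could be sketched as a simple rules function. The field names and thresholds here are purely illustrative assumptions, not regulatory values or anything a particular firm uses:

```python
# A minimal sketch of hard-limit 'kick-out' checks for an automated advice
# journey. All thresholds and field names below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Applicant:
    investment_amount: float    # amount the client wants to invest
    time_horizon_years: int     # how long they can stay invested
    unsecured_debt: float       # outstanding personal debt
    emergency_fund_months: int  # months of expenses held in cash

def kick_out_reasons(a: Applicant) -> list[str]:
    """Return the reasons, if any, why the client must be routed away from
    the automated service. An empty list means they may proceed."""
    reasons = []
    if a.investment_amount < 500:
        reasons.append("investment below the service minimum")
    if a.time_horizon_years < 3:
        reasons.append("time horizon too short for the product range")
    if a.unsecured_debt > 0:
        reasons.append("outstanding unsecured debt should be addressed first")
    if a.emergency_fund_months < 3:
        reasons.append("insufficient emergency fund")
    return reasons
```

The point of returning reasons, rather than a bare yes/no, is that a declined client can be signposted appropriately (for example, towards full advice or debt guidance) instead of being silently turned away.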

If the client has got this far, it’s time to start the detailed profiling. With face-to-face advice, client profiling is a discursive and iterative process. But we don’t have these luxuries in the digital world, so you have to be smart about how you gather the more subjective information. There are plenty of tools you can adapt to assess attitude to risk and capacity for loss. But all the inputs have to join up: the algorithm – and the assumptions and methodologies behind it – has to replicate the intellectual assessment normally undertaken by the adviser to deliver a suitable recommendation.
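One way of making the inputs ‘join up’ is sketched below: the attitude-to-risk and capacity-for-loss scores cap each other, and contradictory answers are referred out rather than averaged away. The scales, profile names and referral threshold are assumptions for illustration only:

```python
# Illustrative sketch: combining an attitude-to-risk questionnaire score with
# a capacity-for-loss assessment. The final profile is capped by the lower of
# the two, since willingness to take risk should never override inability to
# bear losses. All scales and labels here are assumptions.

def assign_risk_profile(attitude_score: int, capacity_score: int) -> str:
    """Both scores are on a 1 (lowest) to 5 (highest) scale."""
    if abs(attitude_score - capacity_score) >= 3:
        # Contradictory answers: a face-to-face adviser would probe further,
        # so an automated flow should refer out or re-question.
        return "refer_for_manual_review"
    profiles = {1: "cautious", 2: "conservative", 3: "balanced",
                4: "growth", 5: "adventurous"}
    return profiles[min(attitude_score, capacity_score)]
```

The referral branch is the digital stand-in for the adviser’s judgement: where a human would spot and explore an inconsistency, the algorithm should at least refuse to proceed on it.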

Manual override

It might fly in the face of the concept of automated advice, but a little human intervention at some point in the process might go some way towards mitigating risk. This could either be some kind of helpline (manned by qualified advisers) or perhaps a step in the process where the suitability of the proposed recommendation is checked by an adviser before the client commits and the advice is implemented.

Testing, testing and more testing

Once you’ve carried out a significant amount of back-testing, stress testing, parallel testing and pilot testing, you should know whether it’s working or not; only then can you move to the live environment. This is when your ongoing monitoring will start. But what will that look like? What kind of sampling should you do? If the methodology was sound at launch, can you assume it’s still alright? Do you need to do any monitoring at all? Of course you do, but how do you make it risk-based when the assumption is that the majority of the business written will have similar characteristics?
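One answer to the risk-based sampling question is to combine a flat random sample with a rule that always pulls in the outliers – the cases where a systematic error is most likely to surface. The flags, thresholds and base rate below are illustrative assumptions, not a recommended monitoring standard:

```python
# A sketch of risk-based sampling for ongoing monitoring: every case gets a
# small base probability of review, while outliers (vulnerable clients,
# unusually large investments, boundary risk profiles) are always included.
# All thresholds and field names are illustrative assumptions.

import random

def monitoring_sample(cases: list[dict], base_rate: float = 0.02,
                      seed: int = 0) -> list[dict]:
    """Select cases for file review from a list of case records (dicts)."""
    rng = random.Random(seed)  # seeded for a reproducible audit trail
    selected = []
    for case in cases:
        always = (case.get("vulnerable", False)
                  or case.get("amount", 0) > 100_000
                  or case.get("risk_score") in (1, 5))  # boundary profiles
        if always or rng.random() < base_rate:
            selected.append(case)
    return selected
```

Because the majority of business written through an automated service will look similar, the value of the sample comes from the ‘always include’ rules rather than the flat rate; the flat rate is simply the safety net for errors you haven’t predicted.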

The bottom line is that there are no shortcuts when it comes to ensuring the robustness of an automated advice process – from initial design, through implementation, to business as usual. You’ll never eradicate the risk of systematic errors, but with some careful thought and manual intervention you should be able to reduce them to manageable levels.
