Are We Wasting Our Time Quality Checking Models?

Model Quality Checks

To be clear, models should be checked, but at present we spend a lot of our time doing the checking. The checks should be automatic.

Model quality checks provide a means of assessing the information within a model. Is the information correct and aligned with the project requirements? Is the model fully coordinated, free from unacceptable clashes and buildable? But existing workflows require manual checks with assistance from software applications. Why not turn this around and have automated software checks, with minimal human intervention?

There are a wide variety of checks that can and should be provided, such as:

  • The structure of a file, especially when it comes to an IFC model file
  • The inclusion of data: has the data that has been requested through an EIR been provided?
  • The format of the data: is the format of the included data correct?
  • Data verification: is the information correct (for example, has a space been correctly classified, and are all GUIDs unique)?
  • Model detail: has the model been developed to the correct level of detail (neither too much nor too little)?
  • Direct geometric clashes: do all the components within a model fit together correctly?
  • Geometric coordination: are the geometric components of a model organised in a sensible way (for example, is a column located so close to a door that it causes problems with the movement of people)?
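The simplest of these checks can be made concrete very quickly. Here is a minimal sketch of a GUID-uniqueness check, assuming model elements have been extracted into plain dicts with a `guid` key (the element structure and the `find_duplicate_guids` name are illustrative, not taken from any particular tool):

```python
from collections import Counter

def find_duplicate_guids(elements):
    """Return any GUIDs that appear more than once among model elements.

    Elements are represented here as plain dicts with a 'guid' key; in a
    real IFC workflow the GUID would be each entity's GlobalId attribute.
    """
    counts = Counter(element["guid"] for element in elements)
    return sorted(guid for guid, n in counts.items() if n > 1)
```

A check like this has exactly the property the rest of this article asks for: an unambiguous rule with a defined outcome, needing no human judgement at all.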

At present, carrying out a full suite of model quality checks requires specialist software combined with human intervention (with specialist knowledge) and a large dose of subjective opinion.

Model quality checks provide amazing value for money by resolving design issues before they become physical issues. But is there a better way of checking models?

Why should a model check need human intervention?

Algorithmic Checking

For model checking to become automated (or at least largely automated) each check needs to be broken down into a set of instructions or rules.

These rules need to be unambiguous, with defined outcomes. This enables algorithmic checking: implementing the set of rules within a software application.

The complexity of the checking rules will vary depending upon the type of check. Some, such as checking that GUIDs are unique, will be very simple to implement. But others will be much more difficult and may require additional contextual information before they can be automated.

The model checks themselves will be dependent upon other model checks, with higher-level, and generally more complex, checks requiring some basic information (which itself needs to be checked). And so we start to build a hierarchy of checks.
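That hierarchy can be expressed directly in code. The sketch below, under the assumption that each check is a function returning pass/fail, runs checks in dependency order and reports a check as skipped when a lower-level check it relies on has failed (the `run_checks` interface is hypothetical, not from any existing checker):

```python
def run_checks(model, checks):
    """Run checks in dependency order.

    `checks` maps a check name to (dependencies, check_function); each
    function takes the model and returns True/False. A check only runs
    once all its dependencies have passed; otherwise it is 'skipped'.
    """
    results = {}
    pending = dict(checks)
    while pending:
        progressed = False
        for name, (deps, fn) in list(pending.items()):
            if any(dep not in results for dep in deps):
                continue  # a dependency has not been evaluated yet
            if all(results[dep] == "passed" for dep in deps):
                results[name] = "passed" if fn(model) else "failed"
            else:
                results[name] = "skipped"  # a lower-level check failed
            del pending[name]
            progressed = True
        if not progressed:
            raise ValueError("circular dependency between checks")
    return results
```

Structuring checks this way means a failed GUID check, for example, automatically suppresses every higher-level check that would otherwise produce misleading results.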

As an example, an accessible WC requires a minimum plan area (to comply with English building control). For this to be assessed, the software application needs to know which spaces are accessible WCs. Leaving aside machine learning and AI, the software application will not know without being explicitly told which spaces are accessible WCs.
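A sketch of that check makes the dependence on explicit classification obvious. The space dicts, the `accessible_wc` tag and the function name are all assumptions for illustration, and the minimum area is deliberately passed in by the caller rather than hard-coded, since the actual figure comes from the relevant building regulations:

```python
def check_accessible_wcs(spaces, min_area_m2):
    """Flag spaces explicitly classified as accessible WCs whose plan area
    falls below the required minimum.

    Spaces without that classification are ignored -- the checker only
    knows what it has been told.
    """
    return [space["name"] for space in spaces
            if space.get("classification") == "accessible_wc"
            and space["area_m2"] < min_area_m2]
```

Note that an unclassified accessible WC simply sails through unflagged, which is exactly the failure mode the surrounding text describes.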

You could argue that you could develop a set of rules for inferring that a space is an accessible WC, such as the space containing a WC, drop-down handrails and an emergency call system. But this would require these items to be within the federated model before the space could be checked, by which time the floor plates should be frozen.

Widespread use of a structured classification system would facilitate many of the higher-level checks. But the classifications themselves would require checking. Are all spaces and objects classified? Is the correct classification system being used? Are the referenced classifications provided in the correct format?
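Those classification checks are themselves mechanical. The sketch below flags items that are unclassified or whose code fails a format test; the regular expression is only an approximation of the shape of a Uniclass 2015 code (a two-letter table prefix such as Pr, Ss, SL or EF, followed by underscore-separated two-digit pairs) and should not be read as the authoritative format:

```python
import re

# Approximate Uniclass 2015 shape, e.g. "SL_20_15_16". The real tables
# carry more nuance; treat this pattern as illustrative only.
UNICLASS_PATTERN = re.compile(r"^[A-Za-z]{2}(_\d{2}){1,4}$")

def check_classifications(items):
    """Return (name, problem) pairs for items that are unclassified or
    whose classification code fails the format check."""
    problems = []
    for item in items:
        code = item.get("classification")
        if code is None:
            problems.append((item["name"], "unclassified"))
        elif not UNICLASS_PATTERN.match(code):
            problems.append((item["name"], "bad format"))
    return problems
```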

Complex Checks

But what about some of the more complex checks, the ones that really provide value and benefit? Many of the simple checks are lower-level assessments that enable the more complex and value-added checks. And complex checks are not confined to direct geometric clashes. Indeed, using classification rules, some direct geometric clashes would be acceptable: small-diameter pipework clashing with internal walls, for example.

Let’s work through some possibilities for complex checking.

Model Detail

Model detail consists of two distinct components. The graphical or visual elements are one part (often referred to as level of detail, LOD); the data or information associated with those elements is the other (often referred to as level of information, LOI). The two components are linked but can be effectively assessed separately.

Level of Detail (LOD) – The graphical or visual elements of a model
Level of Information (LOI) – The data or information contained within a model and associated with distinct items, such as a space or an object

LOI

Firstly, the data. If each item within a model was correctly and rigorously classified (Uniclass 2015 would be an ideal candidate for the classification), this would provide the foundations for automated checking of the LOI.

The data required for each item at each project stage would be predefined. Assigning all the Uniclass product, space and entity codes for each project would be a big task, but an initial generic assignment would reduce the administrative burden, and users could then amend the individual requirements to provide project-specific assessments.

The algorithm would check, for each item within the model, that the data required for the current stage has been provided. This by itself is quite a simple check, with missing data flagged via a dashboard warning that provides access to the detail of what is missing.
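That completeness check can be sketched as a lookup from (classification, stage) to a set of required parameter names. The requirements table, the `Pr_60_60` code and the item structure below are all hypothetical; on a real project the requirements would come from the EIR:

```python
def missing_data(items, stage, requirements):
    """For each item, report the parameters required at the given stage
    that have not been provided.

    `requirements` maps (classification, stage) to a set of required
    parameter names; each item carries its provided data in a 'data' dict.
    """
    report = {}
    for item in items:
        needed = requirements.get((item["classification"], stage), set())
        absent = sorted(needed - item.get("data", {}).keys())
        if absent:
            report[item["name"]] = absent
    return report
```

The returned dict is exactly the kind of structured result a dashboard warning could be built on.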

The data itself would be subject to the earlier lower-level checks, with each parameter having its own predefined checks. The classification codes would be checked for the correct format. An installation date would be checked to confirm that it falls within the predefined stage 5 (construction) dates.
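A per-parameter check like the installation-date example is a one-liner once the stage window is known. The item structure and function name are illustrative assumptions:

```python
from datetime import date

def check_installation_dates(items, stage_start, stage_end):
    """Flag items whose installation date falls outside the construction
    stage window (both bounds inclusive). Dates are datetime.date values."""
    return [item["name"] for item in items
            if not (stage_start <= item["installation_date"] <= stage_end)]
```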

Of course, if machine learning and AI were integrated into the checks, an assessment could be made as to what an object was representing. This would provide a means of checking whether the correct classification code had been applied, and of adding a classification code where none had been provided.

LOD

As for the graphical model content, this at first thought appears to be a more difficult thing to check. Every component could be different; how could varying and different objects be checked?

The approach as before is to break down the checks into what can be checked, rather than focusing on what can’t be checked. Automated model checks would evolve over time, lessons learnt would enhance the checks and every new algorithmic check that is developed would add to the value of the results.

A stage 2 model would typically consist of the facility with floors and spaces. The graphical check for stage 2 would be that the facility, floors and spaces have been provided within the model. This could be augmented with some other checks, such as enclosing walls to all spaces and spaces within acceptable bounds. The acceptable bounds would be related to the space classification: a store room would have smaller minimum dimensions than a classroom. The model could also be checked for too much detail: have items been provided that would not be expected at this stage?
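The classification-dependent bounds check can be sketched as a simple table lookup. The minimum dimensions below are purely illustrative placeholders; real values would come from the project's space standards:

```python
# Illustrative minimum plan dimensions in metres per space classification;
# real values would come from the project's space standards.
MIN_DIMENSIONS_M = {"classroom": 5.0, "store": 1.2}

def check_space_bounds(spaces, min_dims=MIN_DIMENSIONS_M):
    """Flag spaces whose shortest plan dimension is below the minimum for
    their classification; space types without a defined minimum are skipped."""
    return [space["name"] for space in spaces
            if space["classification"] in min_dims
            and min(space["width_m"], space["depth_m"]) < min_dims[space["classification"]]]
```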

Stage 3 would start with similar checks. Has the model been augmented with additional components such as internal doors and structural elements? But then the checks would be enhanced, such as assessing whether a door clashes with an intermediate wall, restricts the flow of occupants or is within sensible dimensions.

As the design continues to develop, it would be useful to define the detail required for different systems (we will consider a system to be a group of interlinked or similar components). For example, maintainable systems (such as mechanical ventilation) would require more detail than systems that do not require maintenance (such as internal partitions). Although every object should, at some point during design development, have its external faces or surfaces correctly detailed, an internal partition wall would not necessarily need the wall build-up detailed: the external faces of the wall in the correct location could be sufficient if augmented with data such as fire rating, acoustic rating, wall type and product manufacturer. Whereas a major item of plant, such as an air handling unit, would require additional detail beyond the bounding surfaces: the locations of duct and pipework connections should be correct, and maintenance access locations should be correctly defined to ensure that sufficient access is provided.

The Future of Checking?

Automated checks would be fully dependent upon the mandatory use of consistent classifications for all items within a model. But consistent classifications should be something that all designers provide (although I realise that, in reality, it is the exception to receive a model with consistent classifications).

Automated checks need consistent classification

Model checks are only limited by our ability to develop them

Automated checking is a reality now. Systems are being developed on existing and robust open source cloud-based model servers, such as BIMserver. Each and every model update is checked as it is uploaded.

The benefits of a high quality, data rich and fully coordinated model are huge. The benefits will also increase as the checks become increasingly sophisticated and reliable.

High quality, data rich and fully coordinated models will provide huge benefits

Ian Yeo ian.yeo@bimsense.co.uk
