When I tried to filter as you showed, I see the same blank lines. I believe this is because the structure items do not have any product or process characteristics (though I could be wrong). However, if in the filter you select "equals" and all three selections available in the "values" box, it then shows only my line items that have a classification. I believe it works as an "or" function: if Key OR Safety OR Quality is a classification, the item will show. Hope this helps!
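To make the behavior concrete, here is a rough sketch of that OR logic in Python. This is purely illustrative (the function and value names are my own, not anything from APIS): an item passes the filter if any one of the selected classifications matches.

```python
def matches_filter(item_classifications, selected_values):
    """Return True if the item carries at least one of the selected classifications.

    Mirrors the observed behavior: selecting "equals" with Key, Safety, and
    Quality all checked acts as an OR across the three values.
    """
    return any(c in selected_values for c in item_classifications)

selected = {"Key", "Safety", "Quality"}
print(matches_filter({"Safety"}, selected))  # item classified Safety -> True
print(matches_filter(set(), selected))       # structure item with no classification -> False
```

So an item with no classification at all never matches, which would explain the blank lines.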
While I understand the desire for a simple software program, I didn't see this as a request for a company-specific solution – other companies could use this feature if they like, or not. Much like the initial default value of 10 (above) is not used by our company, others could choose not to use default values for the rankings.
However, I do not understand your reasoning for different ratings. If a camera checks a process 100% for errors and stops the operation in station, that would earn the same ranking regardless of whether it is in a clean room or in a foundry (a score of 3 per the AIAG/VDA standards, as noted in APIS below). If AIAG/VDA went through the effort to create standardized descriptions for the valuations, then it seems natural to be able to apply a standardized ranking to each control method, correct? It would be one less thing for the hundreds of users in my company to decide and update while managing their APIS documents. We plan to implement the CARMs server to have standard functions, failures, preventions, detections, etc., and having standard valuations would seem to fit right in.
Generally speaking, we DO have only one prevention and one detection action per failure: we record the best method we have in place. For example, we have training for every step, but some steps also have camera verification. The training may be rated 8 while the camera verification is rated 2. In this case we record only the camera verification, because it is implied that we do training for every step. But regardless of the step, the camera verification is always a 2 and the training is always an 8, at least as a default. The software could fill in the default values, and if the user wants to change them, they can – much like the software already marks all valuations as 10 to start, and the user then has to go modify the rankings.
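To show what I mean, here is a minimal sketch of the proposal (my own illustration, not APIS functionality): each action type carries a default detection rating, the user may override it per instance, and where several actions apply the best (lowest) rating wins. The default values of 8 and 2 are just the ones from my example above.

```python
# Assumed per-action-type defaults, taken from the training/camera example.
DEFAULT_DETECTION = {"training": 8, "camera verification": 2}

def effective_detection(actions, overrides=None):
    """Resolve a detection rating: start from the type default,
    apply any user override, then keep the best (lowest) rating."""
    overrides = overrides or {}
    ratings = [overrides.get(a, DEFAULT_DETECTION[a]) for a in actions]
    return min(ratings)  # lower detection rating = better control

print(effective_detection(["training", "camera verification"]))  # -> 2
print(effective_detection(["training"]))                         # -> 8
```

The user could still override a default (e.g. rate a particular camera check 3 instead of 2), but no one would have to re-enter the standard value hundreds of times.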
Thank you for the response. I understand the idea that ratings are tied to the group of actions, but I feel you could still have default values for individual action types and let the system take the best (or worst?) rating – much as is done when more than one failure effect is tied to a failure cause and the system takes the highest severity.
We started using the workaround you mentioned, but I worry that users might get confused if the instance count is visible in their display settings: it would show a different number, and an inexperienced user might incorrectly assume it is the expected ranking.