AQuA Round Table: Thursday October 12 2017

Register for the AQuA Round Table

The next AQuA Round Table will take place on Thursday October 12th 2017 in London. Please complete the form below to register your place. The Round Table is for AQuA Members and Endorsed Members.
 

Your name *

 

Your email address *

 

Please rate the issues below in order of how important it is to you that we discuss them at the Round Table.

1 = most important and 9 = least important
*

Level of importance
AI – how do we approach testing AI aspects of apps
Artificial intelligence and machine learning bring a new twist to testing. The key difference from traditional software is the ability to flex and change the logic based on experience and past events. Is this different to setting up a known data starting position and having predictable results? How can we adapt our testing methodologies to learn to expect new predicted outcomes? Does an AI app require an AI test?
API testing
Traditional interface testing often uses replacement stubs to imitate external system calls. Are today’s APIs any different to the external system calls of some years ago? As systems rely on APIs to do a greater share of their work, does testing a heavily API-dependent system become just a UX testing exercise? Looking outwards, how do we test the API itself? How much do we need to test an API in terms of speed, reliability, loading and consistency of response, and how do we do that? How can you test your app’s resilience when an external API changes or is unavailable? (A brief illustrative stub sketch follows this topic list.)
Performance Testing of apps
AQuA has looked at network performance testing, and the AT&T ARO tool can help, but what should drive the overall performance test? Can we set goals and expectations for modern apps? How can we define them? Do such goals need to be category specific (tools, games, real-time information, trains, buses, etc.)? Is user response a higher priority than system accuracy or functionality?
Virtual reality and Augmented reality
Virtual reality is usually self-contained and can be well defined, but augmented reality interacts with its surroundings. How can we realistically test augmented reality solutions? Do we need to go and test everywhere, in every location, to avoid a modern Apple Maps fiasco? What measures and bounds can we establish to get a right-sized testing solution?
Connected Cars
The variety of mobile solutions in cars is growing by the day: vehicle to phone for entertainment, communications and utilities; vehicle to surroundings for situational information (city traffic lights, roadworks or pot-hole data); vehicle to vehicle for advance information (seeing ahead). In such a complex and safety-critical world, how do we set the boundaries? Must we establish a framework to pigeon-hole sub-systems (as with traditional car electronics componentry), or will that inhibit innovation? We have seen individuals ‘hacking’ into car systems to provide alternative self-driving systems. How do we test in a scenario where we don’t have any idea what other components may be involved?
Internet of Things
Beacons, local interactive devices, intelligent domestic appliances, smart meters, toasters, kettles and routers are all becoming exposed in a wide network of things.
In testing terms, how do we approach this enormous issue? Is it a case of defining our own system’s boundaries and limits and testing to those? How do we test for the impact of other things, either friendly or hostile? Is security the paramount concern? How do we manage version testing on things deployed in the field? What strategies have been deployed and how successful have they been? What strategies are being considered and yet to be tried and tested?
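As background for the “replacement stubs” mentioned under API testing above, here is a minimal sketch of stubbing an external API call in a test and simulating the API being unavailable. It is written in Python using the standard unittest.mock library with pytest-style tests; the get_weather() function, the example URL and the canned responses are purely illustrative assumptions, not part of any AQuA guidance.

import requests
from unittest import mock


def get_weather(city):
    """Hypothetical app code that wraps an external API call."""
    try:
        resp = requests.get("https://api.example.com/weather/" + city, timeout=2)
        resp.raise_for_status()
        return resp.json()["summary"]
    except requests.RequestException:
        # Resilience behaviour under test: degrade gracefully when the API fails.
        return "unavailable"


def test_stubbed_response():
    # Replace the real HTTP call with a stub that returns a canned payload.
    fake = mock.Mock()
    fake.json.return_value = {"summary": "sunny"}
    fake.raise_for_status.return_value = None
    with mock.patch("requests.get", return_value=fake):
        assert get_weather("London") == "sunny"


def test_api_unavailable():
    # Simulate the external API being down or unreachable.
    with mock.patch("requests.get", side_effect=requests.ConnectionError):
        assert get_weather("London") == "unavailable"

The same pattern extends to simulating slow responses or changed payloads, which is one way to probe the resilience questions raised above.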
 
For each of the above issues, please let us know the following:
  • How knowledgeable are you in this area?
  • What issues are you facing in this area that you would like to explore with the group?
  • Are there any wider issues in this area you’d like to explore with the group?
If you don't have a view, then please just put 'n/a'.

AI – how do we approach testing AI aspects of apps *

 

API testing *

 

Performance testing of apps *

 

Virtual reality and Augmented reality *

 

Connected cars *

 

Internet of Things *

 

Are there any other topics that you think we should discuss at the Round Table?