
KS summer update



As usual, it has been a busy summer period for the KS team. Our 2014 library updates are in full swing, due to go live later this month. And the next release of the KS software platform is also due to go live shortly.

Here is a brief summary of what's included in this release:

Content management

We made a number of updates to the KS admin dashboard, to make searching & grouping library material more consistent.


You can now search on a range of new fields to manage your question library, including: question name, training tags, question type, skill level, parent module, parent test and task (T) or knowledge (K).

Draft and Published modules can now be searched and grouped:


Draft and Published tests can also now be searched and grouped:


By popular request, we added a delete option for child accounts. Be careful when selecting this option, however, as all related user and results data will also be removed!


In response to another popular request, we added the option to toggle off tests in your library, so they do not appear in your invite list dropdown (both on your dashboard Invites page and in the browser login dropdown menu).


We also added the option to hide the 'Logout' link on the test UI:


We added an extra check box to cc admins on test invite mails.



Results

You can now create a chart view of the questions which cause the 'Request training' button to be selected during live test sessions. This chart represents the most commonly requested training issues from your team.



You can now create a chart view of the questions which cause the 'Skip question' button to be selected during live test sessions. This chart represents the most commonly skipped questions from your team.



You can now include results from all linked accounts, when viewing data on the ‘Group Comparisons’ page. This makes results analysis across multi-office firms more meaningful.



You can now include results from all linked accounts, when viewing data on the ‘Global Comparisons’ page:



We added a new csv export for question results data, on the Results > Question Performance page:


This report displays: test name, test date, test score, test time, result ID, score & time values per question, average question score & time per question and training tags for each question.
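For admins who want to slice this export further, here is a minimal Python sketch that computes the average score per question from the csv. The column names ('question_name', 'question_score') are illustrative assumptions, not the exact KS export headers.

```python
import csv
from collections import defaultdict

def average_question_scores(path):
    """Average per-question scores from the Question Performance export.
    Column names below are assumptions; match them to the real headers."""
    totals = defaultdict(lambda: [0.0, 0])  # name -> [score sum, count]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            name = row["question_name"]            # hypothetical header
            totals[name][0] += float(row["question_score"])
            totals[name][1] += 1
    return {name: s / n for name, (s, n) in totals.items()}
```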



We added a new tool for viewing 'required' and 'submitted' answer data for each question, on a per test basis.


Test UI

We have made a number of improvements to the code which drives the test UI and user experience during live test sessions.  This update addresses the issue of occasional error messages which display when submitting answers and removes the phantom page 'flicker'.



Plus a number of smaller updates, including:

Increased the word count on the test settings welcome message/user instructions field.
Changed the advisory message on pick list questions.
Increased the system timeout from 20 to 40 mins (if no onscreen activity occurs).
Main website refresh, including customer testimonials, product table and promo movie.
Updates to our own KSadmin interface, for analysing results and user data.

Later this month, we will also be adding the following updates:
A new benchmark charts dropdown on the Results > Global Comparisons page, including 2013 data and new test titles.
Module Performance chart page and csv export with per module results data.
Skipped questions search filter & csv export on Results > Data page.

Work is already under way on the technical specification for the Fall release, which will include new question types, survey options, user data capture reporting, plus the latest customer wish list requests.

R

Born Global

This week KS presented at Born Global 2013, a competition for businesses in the West of England to explain why they have the potential to grow overseas.

Sporting a 'Dragons' Den' style format, KS CEO Rory Vance attended the auditions in Bristol, going head to head with 5 local businesses, all from different backgrounds and industry sectors, including finance, fashion, music and technology. Presenting to a panel of business and marketing experts, each business had just 4 minutes to present a case for why they have the greatest potential for export success.

From multiple auditions across the region, 10 firms will be selected for the Grand Final on 09 December.

Last year's winner accompanied UK PM David Cameron on a trade visit to Japan, so the potential rewards are significant.

We'll keep you posted when the results from the regional heats are announced later this month...

R


Top DAUG 2013

KnowledgeSmart is once again partnering with the team at AUGI for this year's annual Top DAUG skills contest at AU2013.

This year's contest is shaping up to be the biggest ever, with 10 tracks planned for AU delegates.

Here's what we hope to include, using 2014 versions of the software:

3ds Max
AutoCAD 2D
AutoCAD Civil 3D
AutoCAD Plant 3D
Inventor
Navisworks
Revit Architecture
Revit MEP
Revit Structure
BIM

As in years past, contestants will have just 10 minutes to answer a selection of knowledge-based and practical tasks, covering the main features of each application.

The person who posts the highest score, combined with the fastest time, in each track will take the prize for their chosen topic. Then we analyse the data from all streams to crown the overall winner of the Top DAUG contest.
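As a rough sketch of that ranking rule (our exact tie-break logic may differ), sorting by score descending and then elapsed time ascending looks like this in Python; the field names are illustrative:

```python
results = [
    {"name": "A", "score": 90, "secs": 540},
    {"name": "B", "score": 90, "secs": 505},
    {"name": "C", "score": 85, "secs": 480},
]

# Highest score first; the fastest time breaks ties.
ranked = sorted(results, key=lambda r: (-r["score"], r["secs"]))
print(ranked[0]["name"])  # "B" - same score as "A", but faster
```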

Hope to see you there!

R

KS Library Update

It's been a busy few months of library updates, but we've made excellent progress this year, keeping everything up to date and moving forwards.

Advanced material, BIM and company standards appear to be the main themes for content this year.

All of the main Autodesk titles are available in 2012, 2013 and 2014 formats. Here's a summary of live titles:


Our newest titles include: Inventor (parts 1-4), AutoCAD Plant 3D (parts 1-4), Revit Structure advanced and Graphisoft ArchiCAD (parts 1-4).  We've also created a French version of our popular Revit Architecture Fundamentals  test.

Here's what we're working on at the moment:

AutoCAD Civil 3D advanced
Revit MEP advanced
Revit MEP fundamentals (metric data sets)
Autodesk Vault
Bentley AECOsim Building Designer
Synchro
Tekla Structures

If there are any topics you would like to see in addition to those listed here, please let us have your feedback and we'll add them to our rolling list of library updates.

R

KS Wish List

Here's a sneak peek at the latest KS customer wish list ideas, which have been included in the next software release. This will hit the live site in the spring.  In no particular order...

Task 1
I’d love to have the ability to store some of my group comparisons, to avoid having to re-create them all the time.  I know I can save them as PDFs, but they’re always changing a bit, and I check them as requests for info come in, or when I want to look at one or more. (NB this relates to the Results > Group Comparisons page).


Task 2
Is it possible to have 3 options for the wording that goes on the email invitations? For example one for staff, one for interviewees, plus one other? Add more than one option for custom text on the Settings > Test Invite page. Maybe add a drop-down with ‘Invite 1, Invite 2, Invite 3’. Selecting the different option in the drop-down causes the different text to display.


Task 3
In the default KS test invite, there’s a “click here” link that points to the video overview of test taking; however, I don’t see any HTML code available to add another link to the default invite. Can another one be added?  Essentially, we’d like to point to our internal information page.


Task 4
Is there any future hope of allowing groups to be shared between Users and Results? i.e. if you create a group or dynamic group on the Users > Users page, is there a way to re-create this group on the Results > Data page (without having to build it again from scratch)?  Maybe ‘import’ popups on the Users and Results > Data pages to achieve this.


Task 5
When an assessment is shown as “In Progress” on the Invites > History page, is there a way to see how much has been done without opening the assessment? i.e. can I roll over the invite and be able to tell how many questions have been answered? Or a percentage complete value?


Task 6
Add an extra setting in ‘Settings’ for customising the test start page.  Allow admins to customise the appearance of the test start page.  Offer the option to enter our own custom text. One of the fields carries the KS default text (from the test intro). Allow admin to choose where this appears.  Another field will be the KS note about tests with a time limit.  Admins also choose where this appears.


Task 7
Is there a way to suppress the usernames from the various graphical reports generated in the Results pages? I’d like to provide these graphs with just the data points themselves.  Make the charts anonymous.


Task 8
Can we add a filter on the Invites > History page, which allows admins to search on the main user list and provide a sub-list of users who have not been sent any invites at all? Add a new item in the filter at the top of the page called, ‘Invite not sent yet’.


Task 9
Add to Dynamic Groups for users/invites/results the ability to filter by status (i.e. Interviewee/Employee/Ex-employee). Add Dynamic Group filtering to Invite History (plus the above new fields).


Task 10
Swap ‘Skip question’ & ‘Request training’ buttons over on the test UI, to help avoid users skipping questions by accident.


Task 11
If the 'Skip question' button is disabled, can you remove the corresponding blue square on the navigator key?


Task 12
Add an advisory which is triggered if a user enters too many answers for pick list question types (not too few). It says: 'This question has a maximum of x correct answer choices'. Otherwise the pick list marking awards zero for too many answer entries, which might be considered a bit harsh.


Task 13
In the admin dashboard, add a new csv export button on the Results > Data page which displays the info from the user data capture pages (at the start and end of a test).  Can we see some new search fields, which allow results to be filtered, based on these user data capture fields? i.e. job title, industry, self rating value, etc.


We are grateful to everyone who provides feedback on the KS user experience, whether it is an idea for a new admin dashboard feature or a report of the occasional bugs and errors which crop up during live testing. Both are hugely valuable to us and much appreciated.  Special thanks to HOK, Cox, Purcell, RTKL, Pennoni and Evolve for the majority of these excellent service improvement suggestions.  We'll keep you updated as the dev work approaches a go-live date.

R

Born Global - update

Following on from our recent post about the Born Global 2013 competition, we just received the results of the regional heats... and KnowledgeSmart was awarded one of the 10 places up for grabs in the grand final!

10 firms from across the West of England go head to head in Bristol on 09 December, pitching to a panel of business, investment and marketing experts, Dragon's Den style.

We'll have one day to recover, post-AU! :)  Will let you know how it all goes...

R

AUGI Top DAUG 2013 - The Results



This year, KnowledgeSmart and AUGI once again teamed up to provide an interactive skills assessment, across 9 popular Autodesk software tracks: 3ds Max, AutoCAD 2D, AutoCAD Civil 3D, AutoCAD Plant 3D, Inventor, Navisworks, Revit Architecture, Revit MEP and Revit Structure. And this year, we also included the general topic of BIM Skills, to make 10 tracks overall.

Here's a brief history of the TD contest, from AUGI.  And here's a summary of last year's competition.

Once again, we had some nice prizes up for grabs.  Track winners walked away with a $100 USD Amazon voucher and a glass trophy (a combination of high score and fastest time determined the winner of each track), and the overall winner won a free pass to AU2014.


We spent several hours on Monday, setting up 20 networked PCs in the exhibition hall, at the AUGI stand.


Next, 180 copies of Autodesk software needed checking, then all the KS sample data sets had to be uploaded to each machine and checked again.  Big thanks to Tony Brown for accomplishing the lion's share of this mammoth task. (Especially as, when we were half-way through, we realised that our thumb drive was broken and had basically corrupted all the data sets. So we had to start over!)

Anyway, patience is a virtue, as the saying goes, so here's how we looked when we were all nicely set up on Tuesday (with a whole 90 mins to spare before the competition opened!):


The main competition ran over 2 days (1 x 3 hour slot on day one, then 2 x 3 hour slots on day two). Contestants had to answer 10 questions, using the 2014 version of each software title. Each session was limited to just 12 minutes, so people had to work fast! Special thanks to awesome AUGI team members, Kristin, Bob, Michael, Donnia and Richard, for briefing users and making sure everything ran smoothly.

Here are the Top DAUGs in action:


By the end of the contest, we posted 356 results, with 0% failed tests. We should also mention that the web connection provided by the Venetian was super-fast! We measured 150 MB per sec on day one.

Throughout the competition, we posted a rolling list of the top 10 contestants for each track, on the AUGI big screen.

The Results

Congratulations to the following contestants, who won their respective tracks:


And a special mention to the overall winner of AUGI Top DAUG 2013:

Ben Rand

Analysis

So, let's take a detailed look at the results of this year's Top DAUG competition.

Overall

No. of Tests Completed:  356
Overall Average:  48% in 10 mins 49 secs
(NB the average score for 2012 was 41%).



Track 1 - 3ds Max

Track Winner: Ted Moore
Winning Score: 60% in 12 mins 0 secs

Top 10 Contestants:


No. Completed: 14
Group Average: 36% in 10 mins 14 secs



Track 2 - AutoCAD 2D

Track Winner: Ben Rand
Winning Score: 100% in 9 mins 40 secs

Top 10 Contestants:


No. Completed: 118
Group Average: 47% in 11 mins 21 secs



Track 3 - AutoCAD Civil 3D

Track Winner: Steve Boon
Winning Score: 90% in 9 mins 30 secs

Top 10 Contestants:


No. Completed: 36
Group Average: 48% in 10 mins 50 secs



Track 4 - AutoCAD Plant 3D

Track Winner: David Wolfe
Winning Score: 88% in 9 mins 30 secs

Top 10 Contestants:



No. Completed: 10
Group Average: 42% in 11 mins 15 secs



Track 5 - BIM Skills

Track Winner: Brian Smith
Winning Score: 75% in 9 mins 55 secs

Top 10 Contestants:



No. Completed: 10
Group Average: 49% in 7 mins 6 secs



Track 6 - Inventor

Track Winner: Ryan Johnson
Winning Score: 95% in 12 mins 0 secs

Top 10 Contestants:


No. Completed: 38
Group Average: 45% in 10 mins 51 secs




Track 7 - Navisworks

Track Winner: Ryan Fintel
Winning Score: 90% in 11 mins 30 secs

Top 10 Contestants:


No. Completed: 11
Group Average: 50% in 8 mins 27 secs



Track 8 - Revit Architecture

Track Winner: Bob Mihelich
Winning Score: 96% in 11 mins 35 secs

Top 10 Contestants:


No. Completed: 84
Group Average: 50% in 11 mins 16 secs



Track 9 - Revit MEP

Track Winner: David Rushforth
Winning Score: 100% in 11 mins 0 secs

Top 10 Contestants:


No. Completed: 18
Group Average: 63% in 10 mins 50 secs



Track 10 - Revit Structure

Track Winner: Kristopher Godfrey
Winning Score: 78% in 5 mins 50 secs

Top 10 Contestants:



No. Completed: 17
Group Average: 51% in 9 mins 34 secs



Popular Tracks

The most popular tracks, in order of completed tests, were as follows:

AutoCAD 2D - 118 results
Revit Architecture - 84 results
Inventor - 38 results
AutoCAD Civil 3D - 36 results
Revit MEP - 18 results
Revit Structure - 17 results
3ds Max - 14 results
Navisworks - 11 results
AutoCAD Plant 3D - 10 results
BIM Skills - 10 results

Total = 356 results


Range of scores

Interestingly, across the 10 tracks, we saw scores ranging from 0% to 100%. Here is a summary of both ends of the performance scale:

3 x 100% scores (2 x AutoCAD 2D, 1 x Revit MEP).
1 x 0% score (Revit MEP), 16 x < 20% scores (1 x 3ds Max, 1 x Navisworks, 1 x RST, 1 x Plant 3D, 2 x Inventor, 2 x Civil 3D, 3 x AutoCAD 2D, 5 x RAC).

Honourable mention

Special recognition goes to John Fout (@scribldogomega) who placed in the top 10 for 5 tracks. An amazing all-around performance!


Boys vs Girls

Finally, let's take a look at the demographic breakdown of the competition. Out of 356 contestants, 315 were male and 41 female. The average overall performance for each group breaks down like this:

Girls: 48% in 10 mins 58 secs
Boys: 48% in 10 mins 51 secs



So, that's Top DAUG finished for another year. A thoroughly enjoyable 3 days at AU2013. 356 completed tests, across 10 popular tracks. Once again, the overall standard was extremely high, with some outstanding individual performances from our track winners and top 10 contestants.

Congratulations to all our winners. Thanks to the AUGI team for all their support, with a special mention for AUGI President, David Harrington. Lastly, a big thank you to everyone who took part in this year's contest.

See you all in 2014 at Mandalay Bay for more fun & games!

R

Results Analysis - A Detailed Overview



The main point of KnowledgeSmart is to provide AEC firms with detailed skills gap and training needs analysis data.  So let's take a proper look at the process.

To begin, we will assume that you have already gone through the process of adding users, inviting users, assessing your teams and you now have a healthy selection of results in your KS dashboard.

Don't forget, you control the amount of feedback that users get to see at the end of their test session, by selecting your preferred option on the Settings > Test Feedback page.


And you can also choose to display (or hide) the 'Skip Question' (I don't know the answer) button, the 'Request Training' (I'm not sure about my answer) button and the test logout button, via the Settings > Test UI Options page. This will have an impact on the types of results data you are able to capture and analyse later on.



Last, you can choose to display (or hide) the user data capture pages, which appear at the start and end of a test session. We will be adding some new search values shortly, which allow you to create reports on user profiles and demographic data.



You can monitor the status of your invites (Not started, In Progress, or Completed) on the Invites > History page.


For the purposes of this exercise, we're using dummy results data, to protect the (incompetent?) innocent!

When your teams have completed their assessments, you can view your KS results in the Results > Data area of your dashboard.


Use the 'Results per page' drop-down to change the number of results displayed on screen.


Use the account drop-down menu to view results data across linked accounts.


Click on the score link to view a detailed summary of each test report. The report opens up in a new window.


You can display links to in-house or third party learning resources in the test report, which builds a bridge between the testing and training parts of your learning journey.



Creating Groups

You have a range of searching & grouping tools available, for filtering and analysing your KS results in detail.

Step one
Select the 'Show search' link in the orange bar, at the top of the Results page.


There are a variety of different search parameters. You'll also see the five user datafields on the right hand side of the search box (you can add your own custom labels on the Settings > User Datafields page). Use these values to add more depth to your results searching & filtering options.


Step two
Enter the information in the relevant search field and hit 'Search' to filter your results data. Use the check boxes to select your results records for the new group.


Selecting the 'Training requests' filter will display a list of all users who have selected the 'Request training' button during their test session.


Click on each report to see which questions have been flagged.


Step three
Select the 'Show groups' link in the orange bar.


Step four
Use the 'Create New Group' tool to create a new sub-group of your results, based on the output of your search.  Once you have selected your results, type in the name of the new group and hit 'Create'.



Step five
Use the 'View group' drop-down menu to navigate between sub-groups of your results data.



Dynamic Groups

Use the dynamic grouping tool to create new groups which will automatically update, when new test results are added to your dashboard.

Step one
Select the 'Show dynamic groups' link in the orange bar.


Step two
Use the 'New Group' tool to create a new dynamic grouping of your results.  Enter the group name in the field provided.


Step three
Use the 5 datafield drop-downs to create the rules you require, for adding existing and future results to your group.  Save your new dynamic group, by selecting Save Group.
For example, a results group has been created based on datafield 1, City: London.  The next time a new result is added to the database and the associated user record includes datafield 1, City: London, then the results record will automatically be added to the existing group.
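In code terms, a dynamic group is just a saved rule that is re-applied whenever a new result arrives. Here is a minimal Python sketch of the idea; the rule and record shapes are assumptions for illustration only:

```python
group_members = []
rule = {"datafield_1": "London"}  # e.g. datafield 1 = City: London

def matches(record, rule):
    """A record joins the group when every rule field matches."""
    return all(record.get(field) == value for field, value in rule.items())

new_result = {"user": "jsmith", "datafield_1": "London", "score": 72}
if matches(new_result, rule):
    group_members.append(new_result)  # added with no manual rebuilding
```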


Step four
Use the 'View dynamic group' drop-down menu to navigate between sub-groups of your results data.



Exporting Results

You have a variety of options for exporting your KS results data.

Select the 'Export results to csv' button to generate a spreadsheet of overall results data, including a personal curriculum of training topics for each user.



Select the 'Export training info to csv' button to generate a spreadsheet of training workshops, together with a corresponding list of users who have been identified as potentially needing to attend each workshop.



Select the 'Export training requests to csv' button to generate a spreadsheet of users who have flagged questions for further training. The report lists the highlighted questions and corresponding training tags.



Select the 'Export skipped questions to csv' button to generate a spreadsheet of users who hit the 'Skip question' button during their test session. The report lists the skipped questions and corresponding training tags.




Charting & Reporting KS Results

There are a range of charting and management reporting options available in your KS dashboard.

Performance Scatter

This chart displays group results in performance quartiles.  The upper left quadrant (Q1) contains results where users completed their assessment accurately and fast. The upper right quadrant (Q2) shows results with high accuracy, but slower completion times.  Bottom left (Q3) represents lower scores, but in a fast time. Lastly, the bottom right quadrant (Q4) shows test scores which are inaccurate and slow.
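The quadrant logic can be sketched as a small function. The chart's actual split point is not stated here, so using the group averages for both axes is an assumption:

```python
def quadrant(score, secs, avg_score, avg_secs):
    accurate = score >= avg_score
    fast = secs <= avg_secs
    if accurate and fast:
        return "Q1"  # accurate and fast
    if accurate:
        return "Q2"  # accurate, but slower
    if fast:
        return "Q3"  # lower score, in a fast time
    return "Q4"      # inaccurate and slow

print(quadrant(80, 2400, 60, 3000))  # Q1
```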



Training Requirements

This chart highlights training topics for a given group of test results. The logic analyses all of the results for a group, references the training tags assigned to questions presented during a test and lists those tags in priority order. Red indicates the tags which have been answered incorrectly by most people in a given group. Orange is the next highest priority, followed by Yellow; Green training tags are the topics which have been answered correctly by most of the group, so represent the least urgent issues. For example: 10 people are presented with one or more questions which include the tag ‘Rooms’. If 7 of the 10 people answer one or more ‘Rooms’ questions incorrectly, then the keyword ‘Rooms’ will be flagged at 70% and appear in Red on the chart.
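A hedged sketch of that calculation in Python. Only the 70%-equals-Red example comes from the text above; the other colour band thresholds are illustrative assumptions:

```python
def tag_priority(users_incorrect, users_presented):
    pct = 100.0 * users_incorrect / users_presented
    if pct >= 70:
        band = "Red"     # most of the group answered incorrectly
    elif pct >= 50:
        band = "Orange"  # assumed threshold
    elif pct >= 30:
        band = "Yellow"  # assumed threshold
    else:
        band = "Green"   # least urgent topics
    return pct, band

print(tag_priority(7, 10))  # (70.0, 'Red') - the 'Rooms' example
```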


You can also view charts for Training Requests and Skipped Questions on this page.




Question Performance

This chart looks at how each individual question in your library has performed, in any given test.
The logic analyses all of the results for each test and presents an aggregate percentage score for each question, averaged across the total number of results for that test on the account.
For example; 10 people answer a question called ‘Doors’. If 7 of the 10 people answer the question 100% correctly, 1 person scores 50% and the other 2 score 0%, then the question will score an aggregate of 75% and appear in Yellow on the chart.
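Worked as arithmetic, the 'Doors' example is simply the mean of the per-user question scores:

```python
scores = [100] * 7 + [50] + [0] * 2   # 10 users' scores on 'Doors'
aggregate = sum(scores) / len(scores)
print(aggregate)  # 75.0 -> shown in Yellow on the chart
```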


You can also create a chart for the average time taken to answer each question, by changing the radio button at the foot of the chart.



You can quickly view the relevant question data by clicking on each question name in the 'Question List' box, on the right hand side of this page.


Select the 'Export question performance info to csv' button to generate a spreadsheet which presents a breakdown of per question time & score values for each user.



Select the 'Display question answer info' button to view a table of 'required' vs 'submitted' answer values for each user. This is an easy way to identify common mistakes or errors across multiple results.



We will shortly be adding a new page called 'Module Performance', which displays a per module breakdown of KS results data.


Group Scores

This chart displays user performance for any given group, in descending order. The X-axis shows the % score attained and the Y-axis displays user names.



Group Comparisons

This chart allows firms to compare performance, from one group to another, across (up to) 9 sets of data at a time.  Group vs group comparisons can be used to compare a range of results data.
For example; pre and post-training performance, different project teams, offices from different geographic locations, data representing different job titles or industry disciplines, in-house data vs interview candidates, and so on.



Global Comparisons

This chart allows firms to compare in-house performance against a wider set of anonymous industry aggregate results data, for selected tests.


We will be adding new searching and reporting options in a future release, so you can analyse the user background info and demographics in greater detail. Regional benchmark stats and user profiles are a key theme for the global comparisons section.

So, there you have it, a comprehensive strategy for capturing, analysing and reporting on skills gap and training needs data for your organisation.

R

Knowledge Trax - A New Approach



This is one of the best approaches to skills assessment and custom training that we have seen.  It's called Knowledge Trax and it combines best practice for assessment, online & class-based training and service delivery.

Knowledge Trax breaks this process down into 6 simple steps:

1. Assess
2. Analyze
3. Prescribe
4. Educate
5. Re-assess
6. Refine

Here's a detailed summary of the Knowledge Trax process, to help customers measure and improve performance and productivity.

Discovery
Discovery begins with a kick-off meeting where we help you identify technology, workflow, and organizational structure, along with staff roles and responsibilities.

Next, we’ll conduct a skills assessment, which includes assessment management, building role-based or custom assessments based on your needs, and developing custom questions based on your company’s workflow. Our management of this process ensures that all required assessments are conducted and you are able to track who within your organization has and has not taken the assessments.

Planning
In this step, we will analyze your assessment results and build an education plan that combines instructor-led classroom training, instructor-led online training, and pre-recorded self-paced online training. We will then deliver and review this plan with you.

Execution
In the Execution phase, the Knowledge Trax team delivers the training based on the approved education plan. And because these plans are based on the unique needs of your company’s staff and roles, your team may be involved in instructor-led classroom training at one of our facilities or online training.

Maintain
In this phase, we conduct an initial post assessment which will identify areas for ongoing training needs, ongoing refresher training, or gap training. Based on your staff’s individual needs, we’ll deliver the training in the method that best fits them.

Take a look at this video which tells you more about the Knowledge Trax process.

Knowledge Trax partners with KnowledgeSmart and Global eTraining to deliver a complete customer experience.

R

A Decade of Skills Assessment...



At the end of 2013, we celebrated our 10th anniversary, on our journey to develop innovative skills testing solutions for the AEC industry.

Here's a (slightly nostalgic) trip down memory lane, looking back at a decade of skills assessment...

Way back in 2003, we launched a start-up called CADTEST.  Here's our old logo:


This was a plugin software tool, based on the AutoCAD API, which tested basic CAD skills across the following 10 areas (for a simple percentage calculation):
Lines/Sheet Setup & Xrefs/Circles/Text/Blocks/Dimensions/Integration/Layers/Variables/UCS.


We continued developing this application throughout 2004, but eventually realised that we would need more detailed results analysis, if the concept of skills testing was going to go any further.

We also experienced considerable resistance to the 'test' part of the brand, so a new name was launched in 2005 - CADsmart.  We re-wrote the software from the ground up and launched a new application, which offered greater functionality for capturing and analysing results data.


We swiftly progressed our technical offering by adding MicroStation skills testing, plugging into the Bentley Systems API - another industry first.


One of the most important decisions we made early on, was agreeing to focus solely on skills testing, not training, hiring or consultancy services.  And we took an independent approach to recommending training. 

By 2006, we were doubling new sales each year and carrying over 90% renewals.  A brand refresh followed, with a new logo and marketing campaign.



We developed a more sophisticated online admin dashboard, with charting tools, screen recording, 100% movies and detailed user feedback.




In 2007 we introduced performance quartiles, more detailed benchmarking statistics, an 'Xpress' test for interviews, a candidate movie and a web based booking system.  We also ran our first 'CAD Focus Group' meeting, a popular forum for gathering user feedback and software wishlist ideas, which continues to the present day.




By 2008, the world was feeling the effects of the coming 'credit crunch' (read: recession) and the AEC industry was starting to embrace the concept of BIM with greater enthusiasm.  We opened our US office, in sunny Clearwater, Florida.  And we hosted our 'Cutting Edge CAD Management' conference in London.


In the same year, we launched our first interactive Revit Architecture skills test and hosted our first AUGI Top DAUG skills contest at AU, in partnership with 4D Technologies.  We also presented at the Bentley conference in Philadelphia and attended our first RTC conference in Melbourne, Australia.


By 2009, the global recession was taking hold and AEC firms were turning away from hiring and training in large numbers.  This was probably our most challenging year to date.  But it also proved to be a turning point on our journey. With BIM adoption gathering pace, we realised that the plug-in technology we had developed was in danger of becoming obsolete. 'The Cloud' beckoned, and with it the ability for firms to customise their test experience and cover a much broader range of test topics. Our original approach of, 'any colour you want (so long as it's black!)' was not going to allow us to grow much further.

As is often the case with technology, our original ideas were being overtaken by the growth of internet technology and the broad range of software being deployed across the AEC industry. One door was closing, but another one was opening... and the KnowledgeSmart brand was born.


Once again, we found ourselves developing new technology from scratch, but we had 6 years of invaluable ideas to build into our new platform. Plus a loyal customer base and extensive network of authors.

In early 2010, the KnowledgeSmart web based test engine was launched.  And in the summer of 2010 the original CADsmart software was finally retired from service.

Fast forward 4 years and we arrive at the present day, with a fully developed, interactive, web-based test engine, complete with a customisable library of 60+ software assessments (covering the most popular software products from Autodesk, Bentley, Graphisoft and Adobe) and a customer base numbering in the hundreds of firms, right across the globe.



And the funny thing is, in spite of everything we've built over the past decade, all the presentations, meetings, overseas trips, conferences, competitions, highs and lows, in many ways, we're only just getting started on our journey to create a world class skills assessment system.

We can't wait to see what the next 10 years will bring!

R

KS Devs Update - Spring 2014

Q1 is almost done and we've had a busy start to 2014. In fact, if a skills assessment service is a good bellwether for the general well-being of the wider industry, then this is our best start to the year since before the recession began. That's encouraging!

So here's a look at what we've been working on in the past few months:

We started off with a general update to our underlying platform, trading up to .NET 4.5. And we added a new issue-tracking system for logging bugs and system updates.

The Results > Group Comparisons page has a new tool which allows admins to capture and save results comparison charts.


The Results > Data page has new search and export options for capturing 'Skipped' question data:



The Results > Training Requirements page has 2 new tools which display charts for 'Training requests' and 'Skipped' questions:


At the end of a test session, a new advisory note displays, which confirms how many questions have been skipped:


We added some new options for creating dynamic groups on the Users and Results pages:


We added dynamic grouping to the Invites > History page:


Plus the ability to recreate existing dynamic groups in other areas of the system. For example, if you have an existing dynamic group on the Users page, you can now recreate the same group on the Results or Invite History pages. This tool can be found on the Settings > User Datafields page:



Admins can now check the progress of existing 'in progress' tests, by clicking on the new link in the Invites > History page:


We added a new filter on the Invites > History page which allows admins to display a sub-list of users who have not yet received any invites:


We added a new tool which allows admins to anonymise charts on the Results > Performance Spread and Results > Group Scores pages:




The following tasks are currently being tested in our staging area and will shortly be deployed to live:

The test start page has been tidied up and re-formatted.

We fixed a bug which caused tests to display an error if they contained more than 71 questions.

We added the ability to create more than one draft invite template on the Settings > Test Invite page.

We added rich text options, including HTML links, to the Test Invites page.

We added an advisory which is triggered if a user enters too many answers for pick list questions. It says: 'This question has a maximum of x correct answer choices'.

We are adding new options on the Results > Data page to filter and export the user data capture info (which is captured on the pages at the start and end of a test).

We are adding a new page called 'Module Performance', which allows admins to analyse results data on a per module basis.

We are adding new search options on the Accounts > Move Users page.


In addition to the system updates, we are also continuing to work on the KS library. All existing Autodesk titles are available in 2014 format. Plus the following: Vault, Civil 3D advanced, Revit MEP advanced, Revit MEP metric, AECOsim Building Designer, Bluebeam, Synchro and Tekla Structures.

Looking ahead, our summer dev list includes adding new question types and survey options. Followed by individual user dashboards. We'll keep you updated as this work progresses.

R

Structured Knowledge Management Using Skills Assessments - White Paper

The team at A2K, led by ANZ Consulting Manager Sean Twomey, have put together an excellent, thought-provoking paper which examines knowledge assessment and the benefits and process of using skills assessment for benchmarking in the field of design technology.

The paper has a particular focus on the CAD/BIM sides of design and engineering companies, but the process applies to all facets of business. The examples referred to in this paper are based on Autodesk environments and processes (e.g. CAD and BIM), but are also applicable to Adobe, Graphisoft, Bentley and similar packages.

In his detailed analysis, Twomey looks at the process of analysing and harnessing assessment data to calibrate goals, build major improvements and measure progress, with one of the most practical and visible benefits of knowledge management being the ability to facilitate decision-making. He also considers the challenge of overcoming the innate fear of assessments.

Here's a link to download the white paper.

R

KS Summer Devs Update

As we approach the halfway point of the year, we thought it might be useful to provide a brief update on our current plans, including a look at what's coming up in the Fall release of the KS tools.

KS Software

There are two main themes for our next core system update: new question types and individual user dashboards.

The KS user groups have provided clear feedback that they would like to see our next release focus on these two key issues. First, providing the means to capture user opinion as well as test scores, including new test questions (e.g. matching list, fill in the blank, matrix, short essay) and introducing survey options (e.g. Likert scale, free text).

Next, a new method of providing KS admins with the ability to more easily chart results on a per-user basis. And allowing users themselves to log in to their own KS dashboard, to monitor their own personal assessment history, charts and benchmark comparisons. This new UI will be mobile-friendly and HTML-5 compatible.

We also recently introduced a new menu option to capture results data on a per module basis.



KS Library

Our library updates are progressing well. All main Autodesk titles will be available in 2013, 2014 and 2015 formats, in the next couple of months. We also plan to update our SketchUp, Adobe and Rhino data sets this year.

Our Revit MEP fundamentals metric data sets will be available in July. And our Revit MEP advanced modules will also be live next month. Advanced Civil 3D, Synchro and MS Excel are all on the 'to do' list in the coming months. Additional AutoCAD 2D  and advanced RAC questions are also in development during the summer months.

We'll keep you posted as these updates progress.  As always, any feedback and suggestions on new tools and library titles are warmly received.

R

Most In Demand AEC Software Titles

We read an interesting piece on the Black Spectacles blog this week, about the primary software skills required by leading design firms in the US.

Data was collected from a survey of 928 job postings at the top 50 architecture firms.

In summary, for software skills, over 70% of architecture jobs require Revit skills and over 50% still require AutoCAD skills.  The #3 software skill required is SketchUp. Hand-sketching was only mentioned in 4% of jobs listed.

In contrast, here are the top 15 most popular skills assessment topics from the KS library. This list is compiled from data gathered from a list of Architecture and Engineering firms in the UK, US, Canada, Aus/NZ, SA and ME regions. The split between A & E firms is more or less equal.

1) Revit Architecture (Autodesk)
2) AutoCAD (Autodesk)
3) Revit Structure (Autodesk)
4) Bentley MicroStation (Bentley Systems)
5) BIM Management (software vendor neutral)
6) AutoCAD Civil 3D (Autodesk)
7) Revit MEP (Autodesk)
8) Navisworks (Autodesk)
9) SketchUp (Trimble)
10) Adobe InDesign (Adobe)
11) Rhino (McNeel)
12) Bentley AECOsim Building Designer (Bentley Systems)
13) 3ds Max (Autodesk)
14) ArchiCAD (Graphisoft)
15) Adobe Photoshop (Adobe)

Not surprisingly, almost half of the titles listed are from the Autodesk stable. Bentley Systems and Adobe have two apiece, with Graphisoft, Trimble and McNeel making up the rest.  Interestingly, general BIM knowledge (no particular vendor) is a top 5 topic.

Here is a summary of the average score & time for the general level assessments, for each title:

1) Revit Architecture - 69% in 63 mins
2) Revit MEP - 65% in 76 mins
3) Adobe InDesign - 64% in 45 mins
4) Revit Structure - 63% in 64 mins
5) Adobe Photoshop - 62% in 48 mins
6) AutoCAD - 62% in 73 mins
7) Bentley MicroStation - 60% in 84 mins
8) Rhino - 57% in 42 mins
9) BIM Management - 56% in 44 mins
10) ArchiCAD - 56% in 51 mins
11) AutoCAD Civil 3D - 56% in 76 mins
12) Navisworks - 54% in 49 mins
12) SketchUp - 54% in 49 mins
14) Bentley AECOsim Building Designer - 54% in 71 mins
15) 3ds Max - 53% in 55 mins

In many cases, the data illustrates a clear need for additional basic skills training, if the most popular software tools used in A & E businesses are to be used productively and efficiently on Construction projects.

R

KS Library update




We've been busy keeping the KS library up to date in recent months.  Here's a brief summary of our current progress.

We help AEC businesses to identify skills gaps and training needs, for existing teams and new hires, across a wide range of design and engineering software products. Here is a list of the current titles in the KS skills assessment library:

Autodesk

3ds Max (parts 1-4)
AutoCAD 2D for occasional users
AutoCAD 2D fundamentals
AutoCAD 2D fundamentals (extra questions)
AutoCAD 2D – Xpress 
AutoCAD Civil 3D (parts 1-4)
AutoCAD Civil 3D – Xpress 
AutoCAD Civil 3D advanced (Roadway Design)
AutoCAD Plant 3D (parts 1-4)
Design Review fundamentals
Inventor (parts 1-4) 
Navisworks Manage 
Revit Architecture for occasional users
Revit Architecture fundamentals
Revit Architecture – Xpress 
Revit Architecture advanced
Revit Content Creation
Revit for Interiors
Revit MEP fundamentals (Mechanical)
Revit MEP fundamentals (Electrical)
Revit MEP fundamentals (Plumbing)
Revit MEP – Xpress 
Revit MEP advanced (Mechanical)
Revit MEP advanced (Plumbing)
Revit Project Process
Revit Structure fundamentals
Revit Structure – Xpress 
Revit Structure advanced 
Vault fundamentals
KS Community – AEC Layer Standards – Autodesk 
KS Community – Revit Architecture (White Frog)
KS Community – Revit Process & Workflow


Bentley Systems

Bentley AECOsim Building Designer (Architecture)
Bentley GEOPAK (Drainage)
Bentley InRoads (Roadway Design)
Bentley MicroStation 2D for occasional users
Bentley MicroStation 2D fundamentals
Bentley MicroStation 2D – Xpress 
Bentley MicroStation 3D fundamentals
Bentley View fundamentals
KS Community – AEC Level Standards – Bentley 


Adobe

Adobe InDesign for occasional users
Adobe Photoshop for occasional users


McNeel

Rhino fundamentals


Trimble

SketchUp fundamentals


Graphisoft

Graphisoft ArchiCAD  (parts 1-4)


Building Information Modelling (BIM)

KS Community – BIM Management
KS Community – USACE BIM Requirements


We are making good progress on our annual update of the Autodesk test titles. All our KS test material will be available in 2015 format by the end of September.

We receive regular requests from our customers and user groups, for new library material.   Here is a list of our current works in progress:

Revit Architecture fundamentals extra questions
Revit Architecture advanced extra questions
Bentley ProjectWise
Bentley AECOsim Building Designer (Structures)
Revit MEP advanced (Electrical)
InfraWorks 360
BIM 360 Glue
BIM 360 Field


Here is a list of  additional titles, which are currently under consideration:

Adobe Illustrator
MS Excel
MS Project
Solibri Model Checker
Synchro
Tekla Structures
Vectorworks
ARE


KS works with a network of independent subject matter experts, consultants, trainers and published authors. All of the KS library material is fully customisable. Customers can also write their own questions and tests, to cover in-house standards, processes and workflows.

For a free evaluation of the KS tools, please email support@knowledgesmart.net and we'll be happy to set up a trial account for your company.

R

David Miller Architects Case Study


Architects That Invest To Win, Win More Business!



David Miller Architects (DMA) is a forward-thinking, leading-edge architectural practice that has made significant investments in people, performance and processes since its inception in 2000. The net result is improved client service levels and an unparalleled portfolio of repeat business.

For the first six years David Miller worked alone, but then in 2006 he committed to grow his practice to service larger clients and to take on more challenging projects. With a long-term vision for growth and a well-developed ‘emotional intelligence’ (EQ), Miller embarked upon his search to recruit like-minded architects. “Right from the start I looked for people with a certain temperament,” he explained. “The best practices hire great thinkers, great communicators, and team-players but I wanted more.”

Miller searched specifically for individuals with minds open for learning and an abundance of creative energy for problem solving in response to project requirements. “These people are the real transformers of projects because they build enduring and trusted client relationships,” concluded Miller.

According to author Daniel Goleman, this rare combination of abilities is common in architects with high EQ. For Miller, it is the primary reason for the unusually high volume of repeat business enjoyed by his practice. Today the DMA team is 19-strong, having grown steadily to service its expanding portfolio of commercial, education, and residential projects. Throughout this growth period staff turnover has remained low. “I consider it my job to make everyone on our team feel supported by investing in technology and processes to enable them,” explained Miller.

DMA applied for ISO 9001 accreditation when there were only four people on the team. It quickly became apparent to Miller that when supported by good processes his team would be free to make decisions without deferring to him for approval. This approach nurtured and insulated new recruits in a way that quickly enabled their self-confidence to grow; giving them a greater sense of autonomy and achievement. “The whole practice found ISO 9001 to be very liberating and for the first time we were able to pitch for public sector projects that require ISO 9001 compliance from all consultants,” commented Fiona Clark, Practice Director.

There was also a human benefit, as Clark explained. “Younger people found it easier to increase their contributions to the team leading to increased efficiency and more predictable operations across all projects.” Such positive outcomes quickly led to DMA applying for and securing ‘Investors in People’ recognition; placing them among the top 0.5% of architectural practices in the UK.


Perhaps unsurprisingly then, with an office filled with emotionally intelligent architects and designers with minds open to learning, the adoption of Building Information Modelling (BIM) processes, technologies and collaborative behaviour has also been successful.

The result: DMA is now years ahead of the curve for complying with the Government’s BIM initiative for all centrally procured projects to achieve Level 2 BIM status by 2016. As with other parts of their business, DMA approached the transition to BIM with an appetite for learning and a hunger to understand all aspects of the new collaborative workflows.

“Ordinarily across the practice we measure everything so we can continue to improve,” added Clark. “We therefore committed to measure the cost of our BIM investment and compare it to the value we realised at the practice and project level.” 

As with ISO 9001 and Investors in People, DMA engaged external consultants to help measure their BIM performance. First they looked at DMA’s BIM adoption on a macro level, using the American ‘National Institute of Building Sciences – Facilities Information Council National BIM Standards’ (UK standards were still under development), and afterwards they looked at DMA‘s capabilities on a micro level by assessing staff skills when using Revit, Autodesk’s BIM software.

To assess staff skills, DMA engaged the KnowledgeSmart team to identify individual Revit skills gaps and to plug those gaps using customised training programs. “We hope to measure significant productivity and efficiency improvements…” explained Miller, “…but that isn’t our primary goal.”

Indeed when each software license costs around £4,000 it makes little sense to use only a tiny fraction of the software. With modular training from White Frog, prescribed in response to an individual’s KnowledgeSmart skills-gap assessment, DMA can be sure that its architects’ design and project development skills are uninhibited by their software knowledge.

Miller continued, “We first want to ensure that our BIM adoption is strong and the best way to raise the bar is to make sure our architects can achieve everything they need to achieve inside the BIM software environment.”



Sadly not all practice leaders believe in the value of training staff. Some even reject the notion of training on the grounds that the newly skilled employee will just up-sticks and leave for more money elsewhere.

Of course, people don’t always change jobs for more money; many move in search of greater job satisfaction. But as Zig Ziglar once said, “What’s worse than training your workers and losing them? Not training them and keeping them!”

It is common for employees to feel under-appreciated and unsupported by employers when they are passed over for training or promotions. As new talent joins a firm, with new skills in hand, the current staff can start to feel a little overshadowed, causing them to seek out pastures new. This is highly disruptive to active projects, as project-specific knowledge leaves the project along with the staff. According to Ben Franklin, “An investment in education always pays the highest returns.”


Indeed, the longer the construction project the greater the chance of team members leaving. As many construction projects take a long time to complete, and as starting and finishing with the same team in place is a sure-fire way to provide the best client service levels, retaining staff that are well trained and highly knowledgeable is likely to contribute more than anything else towards improved client service levels and project quality.

DMA constantly reinvests for growth. For example, R&D Tax Credits realised from their investment in BIM have been reinvested in its staff to assess and improve their individual knowledge and productivity. And this investment in practice performance is amplified at the project level where DMA clients are also able to measure improved performance.

Miller explained why this matters to DMA: “Our clients make no secret of their project and consultant performance measurement processes and we know that we score very well because they continue to give us larger projects.”


As UK projects move towards more process-driven workflows DMA is ideally suited to further expand its portfolio of projects. Miller concluded, “Our continued investment in the practice, our people, and our processes is paying off. Clients have come to know that they will always enjoy a consistent level of output from DMA which greatly reduces their project risk. And I know that our achievement in that regard is a team achievement of which I am very proud.” 




© Copyright 2014 Oundle Group. All Rights Reserved.

Customising Self-Rating Datafields

At the start and end of a KS online skills assessment, admins have the option of capturing additional demographic data, which provides helpful background information for users and places test results in a more relevant context.

This includes details such as industry discipline, job title, geography, training history and core software use.  We also capture a 'self-rating' at the beginning of an assessment. That is, how well a person thinks they are going to perform, before they begin.



We can then chart anticipated performance against actual scores and capture a 'perception gap'.
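A minimal sketch of the perception gap calculation. Mapping the 1-5 self-rating onto an expected percentage is our own illustrative assumption, not the exact KS formula:

```python
def perception_gap(self_rating, actual_score):
    expected = self_rating * 20      # 1-5 rating -> 20-100% (assumed mapping)
    return actual_score - expected   # negative = over-confident

print(perception_gap(4, 62))  # -18: rated themselves 4 of 5, scored 62%
```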



Here is a customer wish list request we received in May: 

We would like to change the labels of the mini survey at the beginning of an assessment to the following terms if possible:

1 – Basic
2 – Beginner
3 – Intermediate
4 – Advanced
5 – Expert


An excellent suggestion!  So it is now possible to edit the values in the user self-rating title and description fields, so KS admins can add their own custom labels when capturing this data.  The updated fields carry over to the test report, results search table and csv export tools.


In the next release, we'll be adding additional survey options to the KS dashboard, making it possible for KS admins to capture even more detailed information about team skills and user capability.


R

KS Admin FAQs

Here are the most common questions and points of discussion for KS admins.

Q - What's the difference between a Sub-admin and a Full-admin?

A - Sub-admins can set up test sessions from a browser, using the 'Assessment' link, but not access the main KS dashboard.  Full-admins can log in to the KS dashboard and use the browser test set-up tools.



Q - How do I re-set or change my KS password?

A - All users can re-set their KS password at any time, by clicking the 'Reset password' link on the KS login page:


This takes you to the following page: https://online.knowledgesmart.net/PassRecover.aspx.  Enter your KS username and the system will email a new password.


KS admins can set their own password, by logging in to their KS dashboard and selecting the 'Change password' link.


Copy the original password into the first field, then confirm your own password and save the changes.



Q - How do I add a new KS administrator to my account?

A - Go to the Accounts > Your Accounts page of the KS dashboard and click on the 'Administrators' tool (small people icon).


From here you can add new Sub-admins or Full-admins, using the 'Add administrator' fields:



Q - I get an error message when I try to import my user list.  Why is this?

A - First, make sure that you are using a .csv file format, not .xls or .xlsx.  (See help notes on the Users > Upload User Data page for a link to a formatted .csv template).  Also, make sure that you have 10 data columns in your .csv file.
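If you want to pre-flight a file before importing it, here is a small Python check based on the two rules above; the exact column layout comes from the .csv template on the Upload User Data page:

```python
import csv

def check_user_import(path):
    """Pre-flight a KS user import file: .csv format, 10 columns per row."""
    if not path.lower().endswith(".csv"):
        return "Save the file as .csv, not .xls/.xlsx"
    with open(path, newline="") as f:
        for i, row in enumerate(csv.reader(f), start=1):
            if len(row) != 10:
                return f"Row {i} has {len(row)} columns; expected 10"
    return "OK"

print(check_user_import("users.csv"))
```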


Q - My invite mails are not getting through to users every time.  Why not?

A - Sometimes KS invite mails are blocked by spam prevention software.  If the KS mails are not in junk mail, then your corporate security might not be allowing the mails through.  This happens from time to time.  A combination of the following steps usually resolves the issue:

Whitelist the domain @knowledgesmart.net.
Whitelist the domain @ks-server.net.
Whitelist mails from system@ks-server.net.

That should allow the mails to work OK.  However, if they are still getting blocked, there is one more thing to try.  Our hosting company (1&1) sometimes uses a relay server in Germany, called kundenserver.de.

Whitelist the domain @kundenserver.de.

That should sort everything out, from a corporate mails point of view.


Q - Some invite mails contain a username and password.  Others do not.  Why is this?

A - The KS system appends a username and password on invite mails for regular users (i.e. non-admins), but not for system administrators. KS admins can manage their own passwords, so the KS system does not include admin usernames and passwords on invite mails.


Q - When  a user tries to log into a test session, their password does not work.  Why is this?

A - If a user receives two or more test invites in close succession, the latter invite(s) will overwrite the password(s) from earlier mails.  Instruct them to use the password from the last email invite received to access ALL open KS test sessions.

All users can re-set their KS password at any time, by clicking the 'Reset password' link on the KS dashboard login page, which takes you here: https://online.knowledgesmart.net/PassRecover.aspx. Enter your KS username and the system will email a new password.



Q - If a user logs out of a test session, will all answers be saved?

A - Yes.  All answers will be saved and will still be there when the user resumes the test session at a later date.


Q - If a user logs out of a test session, does the clock stop?

A - Yes.  The clock stops recording elapsed time when you log out of a test session and starts again when you log back in.


Q - A test result did not upload correctly at the end of a test session.  Can it be rescued?

A - Yes.  If a user loses their web connection at the end of a test session, occasionally a result will not upload to the dashboard successfully.  Just notify us and the KS team will manually retrieve the score and add it to your dashboard.  Submitted answers will not be lost.


Q - Can I mix & match questions from different test modules?

A - Yes. KS admins have full editorial control over all KS test material, including authoring tools for creating new questions from scratch and sorting and moving existing questions between modules and tests.


Q - Can I extend the expiry date on an invite?

A - Yes. Go to Invites > History, select your invite(s) and use the 'Extend invite' tool to change the expiry date of your invites.



Q - Can I set a time limit on a KS test?

A - Yes. Import your KS test, go to Library > Draft Content > Draft Tests and edit step 4 (Test Settings). Enter your chosen time in the 'Time limit' box.



Q - Can I hide the clock which appears on the KS test UI?

A - Yes. Import your KS test, go to Library > Draft Content > Draft Tests and edit step 4 (Test Settings). Uncheck the 'Show timer' box (see above).


Q - How do I copy admins on KS invite, reminder & results mails?

A - Go to Accounts > Administrators and toggle the account invites tool (small envelope icon) to 'on'.


Go to Settings > Invites and check the box below:


Go to Settings > Reminders and check the appropriate boxes:



Q - Is it possible to edit score and time data for KS results?

A - Yes. If you have a KS result that needs to be edited, email the details to us and we will make the edits via the main KS database.


Q - Can I copy dynamic groups from the Users page to the Results page?

A - Yes. Go to Settings > User Datafields and click on 'Share Dynamic Groups' to share existing groups across the Users, Results and Invite History pages.




Q - Can I filter results by date of test taken?

A - Yes.  You can filter KS results data across a variety of search criteria.  Go to the Results > Data page and click on the 'Show search' tool to display the search options:



Q - Can I export test results?

A - Yes.  Go to the Results > Data page and click on the 'Show groups' tool.  Look for the 'Export results to csv' tool.  You have a number of export options to choose from:
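Once you have an export, you can slice the data in any spreadsheet or scripting tool.  As a rough illustration, the snippet below averages scores per test; the column headers are assumptions, so adjust them to match the headers in your actual export file.

    import csv
    from collections import defaultdict

    def average_scores(path):
        """Average the score column per test name in a KS results csv export."""
        totals = defaultdict(lambda: [0.0, 0])  # test name -> [sum of scores, count]
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                # 'Test Name' and 'Score' are assumed header names; check your export.
                totals[row["Test Name"]][0] += float(row["Score"])
                totals[row["Test Name"]][1] += 1
        for name, (total, count) in sorted(totals.items()):
            print(f"{name}: average {total / count:.1f} over {count} results")

    average_scores("ks_results_export.csv")  # hypothetical file name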



Q - Can I edit the user rating labels which appear at the start of a test?

A - Yes. Go to Settings > Test UI Options and apply your changes in the fields provided.



Q - Can I choose whether or not to display the user survey pages at the start and end of a test session?

A - Yes. Go to Settings > Test UI Options and apply your change in the relevant check box.



Q - Can I stop users from logging out of a test session?

A - Yes. Go to Settings > Test UI Options (see above) and apply your change using the relevant check box.


Q - How can I compare our results to the overall average KS benchmark stats?

A - Go to Results > Global Comparisons and select the title(s) you want to compare your results data against.




KS Tips for Getting Started with Your Assessments

Making a positive start with your KS assessment program is a key part of generating momentum in your wider learning and development plans.  Here is a brief guide which walks you through the right steps to make sure you get off to a flying start!

1. Discuss your goals.  What do you want to achieve?  How will you define success?  Do you have management buy-in to support your plan?

2. Select your KS system administrators.  Decide who will have global access and who will have regional access.  Do you want to include sub-admins?

3. Book your KS Getting Started web meeting with the KS team and/or training partner.  Review the information on the Settings > Resources page of your KS dashboard.

4. Set up your KS accounts.  Do you need one main account or multiple linked accounts?  Add your company branding to your dashboard, test UI & reports.

5. Agree your message and decide how to communicate your plans to your team.

6. Identify a list of users to be tested.  Do you have any positive-minded ‘super users’ or champions who can be relied on to assist your plans?

7. What topics will you include in your test program?  Will you use the KS library ‘off the shelf’, or will you make some changes?  Do you want to add any custom questions of your own?

8. Build your user list.  Filter your user list into groups.

9. How much user feedback do you want to provide in your reports?  Include coaching?  Include links to learning?  Include additional user data capture?

10. Create a schedule of invites.  Who, where, when, what?  Do you want to send a custom invite message?  Will you test new hires at interview?  Who will help to administer this?

11. Send your test invites.  Monitor progress on the Invites > History page.  Set up reminders.

12. Monitor overall test progress and results.

13. Schedule a results analysis workshop/web meeting with the KS team and your chosen training partner.

14. Create reports and feed KS results data into your wider training and learning program.  Include your preferred training partner in this phase of activity.

15. Re-test after training to monitor the effectiveness of your material and measure performance improvement.



New KS Question Types

We have added some new test question types to the KS system this month.  Here's a brief summary of the new choices.

Matching list

This question type asks the user to correctly match the relationship between two lists of data.  Here's an example:


Here's a look at how to build up the answer options for Matching list questions:
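Conceptually, you can think of a Matching list question as a set of left/right pairs; the marker then checks each pairing the user selects against the answer key.  The sketch below is purely illustrative of that idea and is not the KS data model.

    # Hypothetical representation of a Matching list question; not the KS schema.
    question = {
        "prompt": "Match each term to its description.",
        "pairs": [
            ("Term A", "Description of A"),
            ("Term B", "Description of B"),
            ("Term C", "Description of C"),
        ],
    }

    def mark_matching(pairs, submitted):
        """Score one point for each left/right pairing the user matched correctly."""
        answer_key = dict(pairs)
        return sum(1 for left, right in submitted.items() if answer_key.get(left) == right)

    submitted = {"Term A": "Description of A", "Term B": "Description of C"}
    print(mark_matching(question["pairs"], submitted))  # prints 1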



Complete the blank

This question type asks the user to fill in one or more blank fields in a sentence or paragraph, or to select from a list of available options.  Here's an example:


Here's a look at how to build up the answer options for Complete the blank questions:
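Again as a purely illustrative sketch (not the KS schema), a Complete the blank question can be thought of as a piece of text with one accepted-answer list per blank; marking then compares each submitted value against its list.

    # Hypothetical representation of a Complete the blank question; not the KS schema.
    question = {
        "text": "A wall hosts a {0} and a door hosts a {1}.",
        "blanks": [
            {"accepted": ["window", "windows"]},  # blank {0}
            {"accepted": ["handle"]},             # blank {1}
        ],
    }

    def mark_blanks(blanks, submitted):
        """Score one point per blank whose answer matches an accepted value, ignoring case."""
        score = 0
        for blank, answer in zip(blanks, submitted):
            if answer.strip().lower() in [a.lower() for a in blank["accepted"]]:
                score += 1
        return score

    print(mark_blanks(question["blanks"], ["Window", "hinge"]))  # prints 1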



Matrix

This question type asks the user to review a word or short phrase and select a radio button option that relates to the ‘correct’ description.


Here's a look at how to build up the answer options for Matrix questions:



Essay

This question type asks the user to write a longer, free-text answer to a question.  Essay answers are not automatically marked and require manual scoring.



To assign marks to an Essay question, go to Invites > History and click on the 'Pending' link.


Click the 'Edit Score' link and assign your marks. You can also add comments, which will appear in the test report. Save your changes and the final score will appear on the Results > Data page.


The 'Create Question' page now lists all 9 KS test question types:


You'll also see a brief description for each question type on the test landing page:


