Agile Planning: What? How Do I Know What I Will Be Working On During The Sprint?

I have heard of many groups that finish Sprint planning by taking all the tasks the team committed to for the sprint and assigning them out to each team member.  Teams I have been part of have tried this (especially in the first few sprints when new to Agile planning) as it felt like the logical last step of planning.  How else is a team member to know what they will be working on over the next two weeks (or whatever sprint timeframe was chosen)?

The opposite would be to not assign any tasks to individuals during the planning meeting, but instead have the team members select tasks as needed.  Having tried it both ways, I would submit the following:

 

Best Practice: Don’t allocate all Sprint backlog tasks to individual team members at the end of the Sprint planning meeting; rather, have team members pick tasks from the Sprint backlog as they come to the point of needing more work during the Sprint.

This seems counterintuitive to most, myself included.  Questions flood to mind like:

  • How will we be sure that the ‘right’ people do the ‘right’ tasks? 
  • Aren’t there specific tasks that only the ‘experts’ on the team should do? 
  • What about situations where someone has specialized knowledge regarding a specific task or set of tasks? 

Questions come up not only about task allocation, but also about individual commitment and getting the work done. Not having a single person responsible for each task brings questions like:

  • Who do we look to for assurance each task will get completed?
  • Doesn’t this open up opportunities for some team members to slack since they aren’t individually responsible for specific tasks?
  • What about those team members that need to feel the time pressure of a number of tasks in their work queue to work at optimal speed?  (Many of us work best ‘last minute’)

Based on my experience, and that of others I have spoken with, many of these questions pertain more to the cultural shift from traditional Project Manager led teams to self-organizing Agile teams.  Though you may fear that some people might slack, or that tasks might not get completed during the sprint because there isn’t a single person responsible for each task, the opposite actually happens. By leaving the tasks in the sprint backlog to be picked from as the need for more work arises, the team collectively takes on the responsibility to complete all tasks, instead of just ‘my’ tasks in the other scenario. This has proven to be a much more efficient way to ensure the team completes the sprint goals, and does so in highest-priority order.  Also, the daily standup meetings help ensure all team members are ‘pulling their weight’, as there simply is no way to hide if you are updating your team members each day regarding task execution/completion.
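
To make the pull mechanics concrete, here is a minimal sketch (Python, with hypothetical task names and priorities; not any team’s actual tooling) of a shared Sprint backlog that members pull from instead of receiving assignments up front:

```python
import heapq

class SprintBacklog:
    """Shared pool of sprint tasks, ordered by priority (lower = more urgent)."""

    def __init__(self):
        self._tasks = []

    def add(self, priority, name):
        heapq.heappush(self._tasks, (priority, name))

    def pull_next(self, member):
        """A team member pulls the highest-priority remaining task
        when they come to the point of needing more work."""
        if not self._tasks:
            return None
        priority, name = heapq.heappop(self._tasks)
        print(f"{member} picked up '{name}' (priority {priority})")
        return name

backlog = SprintBacklog()
backlog.add(1, "Implement Add User")   # hypothetical tasks
backlog.add(2, "Implement Edit User")
backlog.add(3, "Automated UI tests")

# Nobody owns a task until they actually start it.
backlog.pull_next("Dev A")   # Dev A races the top task to done...
backlog.pull_next("Dev B")   # ...while Dev B pulls the next one.
```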

Certainly there are tasks that naturally go to certain team members based on their past experience.  But more often than not, the team members know of these situations too, and it works out just fine.  In those rare instances where it doesn’t, the team works out the situation as it’s identified and grows from the experience.

To make sure we consciously search for situations that could get us in trouble with task allocation, teams I have been on have gotten into the habit of asking at the end of Sprint planning: “Are there any tasks in this Sprint that someone feels they MUST specifically work on?”  Very rarely do we identify such tasks.  When we do, we discuss the reasons as a team and decide whether the task should go to that person right then.  Otherwise, once the planning meeting is over, each team member selects a task (from their desk) they can ‘Race to Done’ on, enlisting the help of others on the team if they need it.

For those people who work best ‘last minute’ and feel the need to have a large task list assigned to them, they can simply shift their focus from an individual work queue to the team queue and race to done.

The team can use the Sprint Burn down chart to measure how well they are doing at completing the Sprint tasks/goals, and adjust accordingly if the slope of the Burn down shows the Sprint goals won’t be completed.

Use of the Burn down chart enables the team to measure progress and adjust as a whole while the Sprint is progressing.  This, in conjunction with the Daily Standups keeping communication open and bringing issues out quickly, really does work well and eliminates the need to assign all Sprint tasks at the beginning of the Sprint.
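
As a rough illustration of that slope check (a sketch with made-up numbers, not a real Burn down chart), the projection boils down to simple arithmetic: compare the average burn rate so far against the work remaining:

```python
def burndown_projection(remaining_by_day, sprint_days):
    """remaining_by_day: hours of work left at the end of each day so far.
    Returns the projected hours left at sprint end, based on the average
    slope of the burndown line to date."""
    days_elapsed = len(remaining_by_day) - 1
    if days_elapsed == 0:
        return remaining_by_day[0]
    burned = remaining_by_day[0] - remaining_by_day[-1]
    daily_rate = burned / days_elapsed           # slope of the line
    days_left = sprint_days - days_elapsed
    return max(0.0, remaining_by_day[-1] - daily_rate * days_left)

# Day 0 start: 200h committed; after 4 days, 150h remain (made-up numbers).
projected = burndown_projection([200, 190, 175, 160, 150], sprint_days=10)
print(f"Projected hours left at sprint end: {projected:.0f}")
# A non-zero projection means the team should adjust now, not at the end.
```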

Next up in this series, the importance of Visual tools for Sprint planning.

Agile Planning: Group Estimate via Planning Poker

The best way teams I have been on have found to estimate as a group is Planning Poker.

There are many benefits to planning as a group (see my previous post on this topic for more).  One thing that was difficult for me to grasp at first is the idea of group estimating.  Have everyone estimate each task? Even tasks they know they aren’t going to be responsible for?  Yes! The collaborative nature of group estimating helps dig up hidden features/assumptions, as well as providing other benefits.

Teams I have been part of have tried a number of different ways to group estimate, such as:

  • Group determines (informally) who they think will complete the task, and defers to that person’s estimate.
  • Group members all write down an hour estimate on a piece of paper, share what they thought, and then negotiate until some consensus is reached.
  • Play planning poker for estimation.

All were used with some level of success, but the first two generally took much longer to complete the estimation process and also reduced group ownership of all tasks to more individual ownership of certain tasks.  In one instance, the negotiation over a single estimate (not the details behind it) took over 30 minutes, and even after that we just took the highest estimate to be able to move on.

Looking for a better way, the team discovered Planning Poker.  Here are the high-level details (a rough code simulation of a round follows the list):

  • Each team member has a set of ‘cards’, each with a single number.  We use a variant of the Fibonacci sequence for card values (1, 2, 3, 5, 8, 13, 20, 40, 100).
  • An item to be estimated is read to the group.  Any team member who has questions about functionality/etc. is encouraged to ask.  This continues until all questions are answered as best they can be.
  • The group facilitator asks each team member to pick a single card.  (Says: ‘Estimate!’)  The card is not shown to other team members at this point and verbal estimates are avoided to reduce the chance of influencing other team members.
  • All team members estimate (minus the Scrum Master and Product Owner, if they are in the room).
  • Once all team members have an estimate card, the cards are flipped over so all can see.
  • When there is a significant variance between estimates, the people with the highest and the lowest estimates are asked to briefly explain why they picked the numbers they did.  This usually exposes differing assumptions by each team member and allows for some quick discussion on which assumptions are valid.
  • The team is then asked to re-estimate. 
  • This process is repeated until there is group consensus.
  • Side note: This Planning Poker in detail page has a detailed outline of the process if it’s totally new to you.
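
Here is the rough simulation promised above: a minimal sketch of a reveal-and-discuss round, with made-up names and picks standing in for people holding up cards:

```python
CARD_VALUES = (1, 2, 3, 5, 8, 13, 20, 40, 100)  # Fibonacci-like deck

def reveal_round(estimates):
    """estimates: dict of member -> card value, all picked in secret.
    Returns True if the group has consensus, else prompts the outliers."""
    assert all(v in CARD_VALUES for v in estimates.values())
    values = set(estimates.values())
    if len(values) == 1:
        print(f"Consensus: {values.pop()}")
        return True
    low = min(estimates, key=estimates.get)
    high = max(estimates, key=estimates.get)
    print(f"{high} ({estimates[high]}) and {low} ({estimates[low]}): "
          "briefly explain your assumptions, then re-estimate.")
    return False

# Round 1: hidden picks are revealed simultaneously (hypothetical people).
reveal_round({"Ann": 5, "Bob": 13, "Cam": 5, "Dee": 3})
# After the high/low discussion, round 2 usually converges:
reveal_round({"Ann": 5, "Bob": 5, "Cam": 5, "Dee": 5})
```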

A few tips/takeaways from my experience to help with playing Planning Poker:

  • It rarely takes more than two or three rounds of estimating to have total group consensus.  Yes… Really! 
  • Invariably when starting group estimating in general, team members will ask if they can ‘abstain’ from estimating some tasks.  We have chosen as a team not to allow this.  All team members need to participate and do the best that they can.  This has helped all team members get better at estimating tasks they usually wouldn’t be asked to participate in as opposed to relying on specific experts and disengaging.
  • If there is a small variance in estimates (3/4 of team picked 5 and 1/4 picked 2 for example), the team will discuss amongst themselves what they think is best and usually pick a number instead of doing another round of estimating.
  • As a team, we have chosen to stick with estimates that correspond to numbers on cards.  This eliminates the tendency to say: “you have 5, I have 2, let’s just average them out to 3.5 and use that estimate”.
  • We eliminated the 1 and 3 cards, as we found the differences between 1, 2, 3, and 5 too small to be concerned about as a team.  This further encourages ‘bucket picking’ even at the low end of estimating, reinforcing that we are really just sizing activities, not committing to specific time frames. (This decision has been debated a few times since in sprint retrospectives, asking to put the 1 and 3 back, but the team has decided to keep them out for now.)
  • Have a way to limit question/discussion length prior to estimation.  Sometimes team members get caught in the details and ramble on for quite some time.  I have seen groups use a 2-minute timer that any team member can start to limit the current discussion point; when the timer runs out, a round of estimation is required, keeping the process moving.  The point here: the group is not trying to precisely estimate the tasks, it is sizing them, and that only needs to be ‘in the ballpark’, as the main value is gained in that short amount of time.
  • Consider an ‘I need a break’ card.  Some Planning Poker decks have a picture of a pie on one card, meaning ‘I need pie!’ When this card is shown, the group takes a mini-break.  Group estimation can be quite taxing, so breaks are important to keep people fresh and avoid the pitfall of ‘let’s just get this done’ estimation.
  • Like most things, the first few times your group uses Planning Poker, it will take longer to get consensus, but over a number of sprints, estimation goes much faster as the group gets comfortable with the process in general.

Next up in this series, my take on how to best manage Sprint backlog task allocation to team members.

Agile Planning: My Top Five Tips on Decomposing User Stories Into Tasks

Decomposing a User Story involves taking the result your user is looking for (stated as a User Story) and breaking it down into a number of tasks that the team can work on individually.  Here are five tips I have found to be very useful:

1) Decompose User Stories into tasks as a team

Group planning is a cornerstone of Agile development.  Though it may feel inefficient at times, the benefits are well worth it.  See my previous post: Agile Planning: Plan/Estimate As A Group, Really? for more information/details.

2) Attempt to size your tasks to take one team member between 1/2 day and 3-4 days to complete

The motto here: allow a “Race to Done” situation.

Tasks that are smaller than a handful of hours end up taking too much time administratively to create/track/update.  Tasks that are larger than 3-4 days (some would say that is too big, but teams I have been on have found it workable) really should be broken into a couple of tasks if possible; they just take too much time to be able to race to done effectively.
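
To make the thresholds concrete, a trivial sketch (assuming an 8-hour day and hypothetical task names):

```python
# Sizing rule from above: roughly 1/2 day (4h) to 3-4 days (~32h) per task.
MIN_HOURS, MAX_HOURS = 4, 32

def size_check(name, hours):
    if hours < MIN_HOURS:
        return f"'{name}' ({hours}h): too small, administrative overhead dominates"
    if hours > MAX_HOURS:
        return f"'{name}' ({hours}h): too big to race to done, split it up"
    return f"'{name}' ({hours}h): good size for a Race to Done"

for task, est in [("Tweak label text", 1), ("Implement Add User", 12),
                  ("Rewrite reporting module", 60)]:
    print(size_check(task, est))
```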

3) Create tasks that result in a deliverable unit of work when completed

When decomposing a user story, be sure to break the story down into tasks that can be completed in a small amount of time (Point 2) but don’t focus on the time so much as ensuring you are creating tasks that result in a deliverable unit of work.

Don’t break down a ‘Maintain User’ feature into (like you might have in the past):

  • Build the UI
  • Build the biz logic
  • Build the data tier

Instead create vertical slices of functionality when possible:

  • Implement Add User
  • Implement Edit User
  • Implement Delete User
  • Implement Add/Edit/Delete User Automated UI tests

When a team member takes on the ‘Implement Add User’ task, it’s a contained unit of work, straightforward to know when completed, and not dependent on other tasks being completed in order to be tested (whereas Build UI / Data tier / Biz logic are all dependent on each other to deliver functionality to the user).

4) Don’t get caught deep diving into the details of each task

This is more difficult in practice than in theory.  Knowing that task estimation is just around the corner for the team, it’s natural for an experienced developer to want to define every detail possible, down to ‘how many stored procedures are we going to be creating’.  This, in theory at least, helps ensure the estimation process will be more precise.  I don’t buy into this theory, at least not given the time the team must invest to get that extra level of precision. Certainly attempt to ask the functional questions when decomposing a User Story, so hidden functional ‘gotchas’ are uncovered, but also realize the team is just defining/sizing effort at this point, not writing a ‘development specification’.

5) Ensure testing/automation tasks are included

On the teams I have worked with over the past years, we have always had a group of professional Quality Assurance Analysts.  Our Scrum teams are no different, so these types of tasks don’t usually get forgotten, but for the many teams that don’t have QA pros integrated into their Agile teams, I can imagine this being missed.  A motto of ‘get the functionality to the user as quickly as possible’ would seem to lead to that.  Just because the team is Agile doesn’t mean there isn’t any testing that should go on!  Automating tests where appropriate is also very important, given the large amount of regression testing that is needed when sprinting in 2-4 week timeframes.

Next up in this series, group estimate via Planning Poker.

Agile Planning: ‘Ideal Day’ – User Story Estimation

I don’t like Story Point estimating.  There, I said it.  I know many have had success with Story Point estimating, and the Scrum guru Mike Cohn advocates it in his books/etc.  I have just found it to be too abstract, and difficult for developers (and myself) to grasp when starting out with Agile techniques.

In my experience, when developers/engineers/etc. are asked to estimate in hours (which is very much the norm in software), they aren’t really thinking in hours.  Truth be told, I don’t think many actually think in hour blocks when estimating, but instead think in terms of ‘days of work’ or partial days of work.  Here’s an example of what a developer is thinking when giving an estimate:  “Hmm.. I think this task should take me about a day, maybe a day and a half, to complete, so let’s make it 8 * 1.5 = 12 hours.”  Tell me you don’t do that?  There wasn’t any self-talk about part one of the task taking 2 hours, part two 6 hours, etc.; rather, they ‘chunked’ their time into days.

So in comes Story Point estimating.  We don’t want to be estimating User Stories from a calendar-time perspective, but instead relative to each other.  This allows for quick estimates that give size but not a ‘commitment’ to time, which is what most developers feel an estimate is. Hour estimates come during User Story decomposition, part of Sprint planning.  Unfortunately, how do you define one Story Point?  What is your logical point of reference?

This is where the ‘Ideal Day’ metric works better for me.  This metric was shared with me by Pete Carroll and is really an abstraction of the number of hours you would normally expect a developer to be productive during a typical day, subtracting time for meetings, bathroom breaks, etc. This will vary from organization to organization, but it has a large benefit over Story Point estimation IMHO: it is the default metric developers are already thinking in, as I alluded to above.  There isn’t any translation in their head, no trying to define an ambiguous metric.  Instead it’s the gut feel that is natural to all developers with some experience, while still allowing for the relative estimating of User Stories to take place. All Ideal Day estimates should be in round numbers (i.e., 1, 2, 3, not 1.5, 2.34, etc.).

The trick here is to realize the Ideal Day metric is still an abstraction of time estimates.  We don’t plan Ideal Days directly onto a timeline; instead we use team velocity, matched with Ideal Days, to lay out User Stories on a timeline at a high level.  The velocity metric will help even out the group estimation variance just as with Story Points, but it feels much more natural to the team.
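
A minimal sketch of that high-level time-lining, with made-up stories and a made-up velocity: sum the Ideal Day estimates and divide by the velocity the team actually demonstrates per sprint:

```python
import math

# Ideal Day estimates for the backlog, in whole numbers as suggested above.
backlog = {"User login": 3, "Password reset": 2,
           "Admin dashboard": 8, "Audit log": 5}   # hypothetical stories

velocity = 6  # Ideal Days this team actually completes per sprint (measured)

total_ideal_days = sum(backlog.values())
sprints_needed = math.ceil(total_ideal_days / velocity)
print(f"{total_ideal_days} Ideal Days / velocity {velocity} "
      f"=> about {sprints_needed} sprints")
# Note: the estimates stay relative; velocity is what ties them to a calendar.
```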

Next up in this series, decomposing user story tips.

Agile Planning: Plan/Estimate As A Group, Really?

So you decided to go ‘Agile’ with your team?  Maybe you have read a book or two on Scrum, XP, or something similar.  Many of the base ideals of Agile development make intuitive sense, but this idea of “Group Planning/Estimating”…  Really?  Surely that isn’t going to work with ‘my’ team.

I know this is where I was several months ago.  The team I work with has been using a number of Agile concepts for years to get projects done, and doing it quite successfully I might add.  I was reading up on Scrum, intrigued by its simplicity, yet I didn’t immediately grasp how important group collaboration is to its success.  I had originally balked at the idea of having Daily Standups (after having them for months now, I realize how wrong I was on this), so the idea of having the whole team get into a room for several hours to “plan” (which includes some estimating) every two weeks (our chosen sprint length) just sounded so inefficient.  I mean… isn’t planning/estimating the PM’s job?  Why include all those people and take them away from “development time” to plan?

Well, I am here to say that a transition to group planning can be difficult, but once you work through the growing pains, it’s well worth the effort.  Don’t short-change yourself by going halfway, either.  If you are planning for a 2-week sprint and it’s only taking half an hour, you probably aren’t planning as a group, but rather planning individually and meeting for a short time to pick tasks individually for the sprint.  Most people I have spoken with or read online say it should take around 2 hours of planning per week of sprint (4 hours for a two-week sprint).  This depends on the size of the team, the complexity of the project, etc., but it’s a good rule of thumb, and one I have found to be about right given the number of sprints I have been involved with.

There certainly is work that needs to be done prior to the planning meeting, especially by the ScrumMaster and Product Owner, to groom the project backlog and ensure the User Stories are in a state ready to be handed to the team.  But the process of taking a feature and decomposing it into development tasks (unless most of the features you are building are trivial in nature) should be done by the team; who knows the work better than the ones who will be doing it?  Picking the tasks as a team: golden.  Estimating the work as a team: what better way to ensure all team members have at least a semi-good understanding of the work being committed to? Having a say in this process also breeds ownership by the team and its members.  They have some ‘skin in the game’ from the planning stages, and that helps set them up for success in meeting the goals of the sprint, which is great for all involved.

I highly recommend reading Agile Estimating and Planning by Mike Cohn as his book will help you work through the transition to group planning very well.

My next post will cover the ‘Ideal Day’ metric, and how I have found it helps the team size User Stories (features).

My Top 5 Mobile Connections 2011 Take-A-Ways

As I wait in the airport for my flight to board, I figured I would put together a quick Mobile Connections 2011 ’Top 5 take-a-ways’ post from my perspective.  Lots more detail is in my previous posts for each session; this is just my ‘mind dump’ without looking back at my individual session notes.

My original “big goals” coming into the conference were to get a feel from the experts on where cross-platform mobile development is headed, whether there are any tools to build for the four major platforms from one code base, and if so, which tools are leading the charge in that space now and expected to lead going forward.

#1 Take Away: Cross-platform development via one code base (including HTML5) is tough at best, crazy to try at worst.   A number of the experts flat out said: if you want an average-at-best application, go ahead and try to use a cross-platform tool.  Average meaning it won’t feel like the other apps on each platform… compromises have to be made because of the lack of support for some features on each platform, as well as the different UI styles.  If you want a decent application, the UI needs to be built with native code.  Plain and simple.  Furthermore, this isn’t changing anytime soon, so just get used to it.

#2 Take Away: The “cloud” term means different things to different people (no surprise there), but the idea of the ‘private cloud’ vs. the ‘public cloud’ really hit home for me.  Our ability to leverage cloud technologies without actually having to put our data on some ‘public’ server is attractive… especially as a transition in the short term, while the technologies around public cloud security mature.  The ability to do hybrid cloud offerings, with your web servers hosted by a public cloud provider but the data hosted on company-owned cloud technology, sounds great for SaaS providers that have sensitive data to protect as best they can while still allowing for maximum scalability.  Cloud and Mobile really go hand in hand now, if you expect to support a significant number of mobile users anyway.

#3 Take Away: NoSQL solutions are super fast and scale thousands of times better than disk-based SQL solutions.  If you are going to be supporting mobile clients in the tens of thousands or more, you need to be utilizing this type of technology.  Redis seems to be mentioned in every conversation regarding this platform type.  (NoSQL at its most basic is a fully in-memory key-value data store.)  Excellent tips were shared in my notes writeup of the Architecting Back End Systems for Mobile session.
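
For a tiny taste of the key-value model, here is a sketch using the redis-py client (assumes a Redis server on localhost and `pip install redis`; the keys and values are hypothetical):

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Everything is a key-value operation: no schema, no joins, served from memory.
r.set("user:1001:session", "f3a9c-example-token")  # hypothetical session token
r.expire("user:1001:session", 1800)                # auto-expire after 30 minutes
r.incr("stats:api_calls")                          # atomic counter, cheap at scale

print(r.get("user:1001:session"))
print(r.get("stats:api_calls"))
```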

#4 Take Away: SaaS companies (and others using web technologies) need to look at the product offerings BiTKOO has. Their Keystone app is an amazing abstraction of authentication/authorization and, from a coding perspective, is really plug and play.  Also, their SecureWithin application gateway opens up many possibilities for securely accessing corporately stored data over the web. More info can be found in my notes writeup on the session BiTKOO’s CEO gave.

#5 Take Away: The speakers at these conventions are top notch from a know-how perspective.  The value they provide to attendees in answering questions after the sessions alone is worth the cost/time invested to attend.

I do feel it’s important to throw in this ‘bonus’ takeaway… I will call it 1a as it’s a continuation of the first takeaway:

#1a Take Away: I did attend a workshop yesterday on the RhoMobile toolset for cross-platform development. Though I wasn’t crazy about how the session was conducted, the products they have do look very promising. Using web developer skills (Ruby), the tool supports all the major mobile OS’s (WP7 and WinCE support too, in about a month) and generates native code for each platform. It has a number of excellent features, of which the one I liked most was its support for specific style sheets per OS. So you build your UI using web programming skills, and the product styles the UI to look like the ‘normal’ app presentation for that OS. It comes with stock style sheets for each OS, and it really does work well. It has support for camera, Bluetooth, etc., as well as a mapping control that makes use of the OS’s preferred mapping API very nicely. The toolset also has a local data storage tier that takes advantage of SQLite. If the platform you deploy to doesn’t have SQLite embedded in it, the tool will deploy a binary representation of it, so you can plan on a single local data source across all platforms. This tool has great promise from what I can tell.

My Notes: 3 Screens and the Cloud: How NUI Technologies Play Nice Together

Speaker: Tim Huckaby, Chairman of InterKnowlogy.com

He uses the Awesomium control to embed web content into WPF applications for kiosks/etc.

Natural User Interface (NUI) =  touching the screen (or manipulating the screen without touching it)

Tenets

  • The content should define the experience
  •  The “Grandma Huckaby Test”: the ability to effectively use the kiosk without training
  • No one should have to touch the machine to update content (remote deployment while running)
  • Updating content should happen centrally and should have automated delivery
  • Can’t go too deep screens-wise (maybe 2-3 levels deep at most)

If something is moving (even a simple animation), a human’s attention is caught.  You are going to look at it.

Touch Capable Hardware Implementations:

  • Capacitive – Think electric impulse (iPhone and others)
  • Infrared – The expensive ones.  Think laser pointer(s).  (Best fidelity of touch… costs tens of thousands of dollars.)
  • Resistive – Think push down and drag.  (Old; no ‘cool’ devices use this anymore.)

Tip: 98% of the time, two simultaneous users is all a device like this needs to support, though people think it will need more; the use cases just don’t call for it.

Given a typical user session of under 5 minutes on a kiosk-type device, you need to keep the navigation shallow and intuitive.

.NET 4 has decent support for touch.  Before .NET 4, support was very minimal.

.NET 4 turned touch into a first-class citizen for developers.

WPF does support true distributed computing (with the .NET 4 version).

Convinced we gotta do mobile apps native.  The user experience in particular requires it.

Azure is easy for .NET devs.

Important aspects of storage (a rough cost sketch follows the list):

  • Space consumption & Transactional cost
  • Some storage is designed for unlimited storage but you pay per transaction
  • Other storage mechanisms are designed for limited storage but unlimited transactions
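
A back-of-the-envelope sketch of that tradeoff, with entirely made-up prices, showing why the access pattern decides which model is cheaper:

```python
def monthly_cost(gb_stored, transactions, price_per_gb, price_per_txn):
    return gb_stored * price_per_gb + transactions * price_per_txn

# Hypothetical pricing: model A = cheap storage, pay per transaction;
# model B = pricier storage, transactions effectively free.
workload = {"gb_stored": 500, "transactions": 50_000_000}
a = monthly_cost(**workload, price_per_gb=0.10, price_per_txn=0.00001)
b = monthly_cost(**workload, price_per_gb=0.50, price_per_txn=0.0)
print(f"Model A: ${a:,.2f}/month, Model B: ${b:,.2f}/month")
# A chatty mobile backend (many small transactions) flips the winner
# toward paying more for storage instead.
```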

The biggest problem with Azure now: there is no way to really know how much this thing is going to cost.

Kinect can authenticate (differentiate between faces… and when voice is there, voices too)

Hooking Kinect into the Windows OS sure looks like a step toward having a ‘Minority Report’ type user interface for computers.

This session was an excellent way to end the conference for me.  Tim is an excellent speaker and showed some really interesting technologies.  It piqued my interest in looking into some possible UI design changes we might be able to make.

My Notes: Building Location Aware Android Apps

Speaker: Wei-Meng Lee

Wei’s talk was very good, but unfortunately for me it covered many points I had already been introduced to in the two other location-based sessions I attended in the past two days.

Here are the main takeaways I got from the session that were not already covered in the other sessions:

TIP (one of the most common problems): the app requires the INTERNET permission in the AndroidManifest.xml file for the mapping control to work correctly.

Troubleshooting tip: if you don’t have internet access working in the emulator, or the Maps API key is not entered in your code, you won’t be able to see the map.  (95% of the issues people have are these two.)

He confirmed native support for geocoding and reverse geocoding by Google Maps too.

Don’t use both LocationManager providers (“GPS” and “Network”) at the same time; write the code to switch one off and the other on as needed, otherwise your coordinates will change often.

My Notes: The New Age of Cloud Computing Eliminating the Need to Write Security Code

Speaker: Doron Grinstein (BiTKOO)   (Involved with development of FastPass at Disney)

Cloud computing is not a server with a longer extension cord (co-location of our hardware).

Cloud computing definition (in his mind): the ability of a 3rd party to store, process, search, and compute, without being able to look at my data even with a court order.  Algorithms should support this mechanism, instead of relying on mere ‘trust’.

Think: “Google for private data.”  You don’t have to know the background technologies; just ask for data and you get it back.  This is what cloud computing is to him.

Real world example: why is there e-commerce?  What enables e-commerce regarding the entry of credit card info?  When you purchase online, you trust the protocol (SSL), don’t have to trust the intermediaries.

XACML helps enable such a situation for application access control.

Keystone is their application access control engine.  It “provides fine-grained authorization using the XACML standard”.

Point: the security ‘goop’ of an app takes, on average, about 30% of the software development effort.  That’s a lot, and it gets done over and over again as new software applications get developed.  Why roll this type of code into every new app?  Instead, use the Keystone product to do it for you.  You just have to set up a metadata DB for your elements and security roles/etc., then hook up an authentication adapter based on your existing authentication process, and the tool will take care of the access control.
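
This is not Keystone’s actual API, but the idea of externalizing access control can be sketched generically: the app stops reasoning about roles itself and just asks a policy decision point for a permit/deny answer (a hypothetical, XACML-flavored sketch):

```python
# Hypothetical policy metadata, the kind of thing a junior dev could maintain
# in a DB instead of burying role checks in application code.
POLICIES = [
    {"role": "hr_manager", "action": "read",  "resource": "employee_record"},
    {"role": "hr_manager", "action": "write", "resource": "employee_record"},
    {"role": "employee",   "action": "read",  "resource": "own_record"},
]

def is_authorized(roles, action, resource):
    """Policy decision point: the app never reasons about roles itself,
    it just asks for a permit/deny decision (the XACML idea in miniature)."""
    return any(p["role"] in roles and p["action"] == action
               and p["resource"] == resource for p in POLICIES)

# The application code shrinks to a single question:
print(is_authorized({"employee"}, "read",  "own_record"))       # True
print(is_authorized({"employee"}, "write", "employee_record"))  # False
```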

My comment straight from my notes while watching the demo: Wow, this tool is amazing.  In essence it allows a person to set up a data dictionary for application security via the cloud.

By externalizing authentication and authorization, you are no longer reinventing the world.  Just using this as a tool for authentication/authorization.

Tool enables federation without writing code.

Also showed tool: SecureWithin.  “As secure as your weakest link”

“Traversing the firewall is a job for a 12 year old.  Going to bypass the concrete wall, instead I will go through the window.”  (Windows = endpoints… weakest link most of the time)

Endpoints need to be protected (properly, from within)… if you’re trusting the infrastructure to protect them, you’re in trouble.  It’s not a matter of if, but when.

All functions available in the GUI are available via WCF Calls too.

Wow.  This company’s offerings are groundbreaking.  They challenge the normal way of thinking.  Amazing.

They allow for 6 different ways to get the product:

1) Installable
2) Hardware appliance
3) VM appliance (VMware or Hyper-V)
4) Cloud (EC2, Azure)
5) Hybrid (1-3) + 4
6) Source

Most of the membership enforcement/etc is done via the ISAPI Filter type setup. 

Products used by Disney, Time Warner, Department of Defense, many other large companies/organizations.

Overall, this one-hour session challenged many concepts I thought I understood.  Authorization/authentication via a product like Keystone is amazing, and it can become a task for a more junior developer (setting up the metadata in the DB, in essence) as opposed to some of the most experienced/important developers on your team, allowing them to focus on other important tasks.

BiTKOO looks to me to be a company to watch, and one I want to talk to my colleagues about to start the buy-in process, so we can look into using such tools in the near future.

My Notes: Securing Innovation | Cloud Connections Keynote

Speaker: Nils Puhlmann, Chief Security Officer, Zynga Inc.; Co-founder, Cloud Security Alliance

Zynga (maker of FarmVille, CityVille, Mafia Wars, etc.) adds as many as 1,000 servers a week to keep up with growth.

“We have to accept what we all know to be elemental – that taking a defensive position can, at best, only limit losses.  And we need gains.” – Peter Drucker

Point of quote is to say we need to shape security as an enabler rather than just thinking about it as a way to be on the defensive.

Top mobile activities in US:

  • Sent text messages: 68%
  • Took a photo: 52.4%
  • Accessed news and info: 39.5%
  • Used browser: 36.4%

Point: the spectrum of usage is getting wider.

47 apps downloaded per user for iPhone/iPod Touch; 22 per user for Android.

The Internet has changed from an Internet of content/search to an Internet of people interacting.

 

Social networking has surpassed email use now.  People used to get Internet access to reach their email account(s); now it’s to get to Facebook or other social networking sites.

Most Security challenges of Social networks are not technical.

Non-technical:

  •  Obvious productivity impact
  •  Information disclosure
  •  The graying of personal and professional lives
  •  Corporate disclosure
  •  Social engineering made easy
  •  Sharing of passwords/predictable user names

Technical:

  •  Social networking malware
  •  Most AV is challenged by web-based malware
  •  Bots
  •  Bandwidth concerns

“AV is dead anyway”… web-based malware eliminates the effectiveness of desktop anti-virus products.

Top risks of social networks:

  •  Unproven identity of profiles and info
  •  Malware targeting social network sites and users
  •  Inadvertent disclosure of private or sensitive info
  •  Social engineering made easy
  •  Complete loss of privacy
  •  Identity theft
  •  Frameworks for app dev and delivery can lead to malware distribution

Maltego.com… shows you info correlation/connection.  Check this out on your own name.

Touchgraph.com – Google tool that shows social relationships.

 

Key cloud security problems of today (from CSA Top threats research):

  • Trust: lack of provider transparency, impacts governance, risk management, compliance
  • Data: Leakage, Loss of storage in unfriendly geography
  • Insecure Cloud software
  • Malicious use of Cloud services
  • Account/Service Hijacking
  • Malicious insiders
  • Cloud-specific attacks

The only way to drive risk down to an appropriate level is by managing vulnerabilities.

Now more than ever, it’s important to have experts look at your data/apps/etc regularly.

It’s also important to have separation of duties.  You don’t want one person (or the use of that person’s credentials) to have so much access that changes to systems get implemented without a checkpoint.

Security as a Service Initiative:

  • Info assurance challenged by disruptive trends (cloud, mobile, social networking, etc)
  • Cloud provides an opportunity to rethink security (economics, architecture, service delivery models, etc.)