Wednesday, November 24, 2010

Basics of Identity and Access Management (IAM)

What is Identity and Access Management?
Identity and Access Management (IAM) has emerged as a critical foundation for
realizing business benefits in terms of cost savings, management control, operational
efficiency, and, most importantly, business growth for eCommerce. Today, almost all businesses
conduct their commerce through open doors—whether through a Web site, by allowing business partners to
access the company’s IT resources, or by conducting business through a storefront. As Web services become
more mainstream, that openness will only increase. The doors of the enterprise are wide open for business. While this openness creates business opportunities, it also presents security challenges and potential risks. Moreover, enterprises must provide this access for a growing number of
identities, both inside and outside the organization.
It is no longer sufficient to just manage passwords. When trading partners, customers or employees are
allowed broader access to the infrastructure, it is important to carefully identify who the user is, what they need access to, what they currently have access to, what they can do and what can be done with their information, all while ensuring compliance with corporate policies.

IAM comprises the people, processes and products used to manage identities and access to
the resources of an enterprise. Additionally, the enterprise must ensure the
correctness of its identity data in order for the IAM framework to function properly. IAM
components can be classified into four major categories: authentication, authorization, user
management and the central user repository. The ultimate goal of an
IAM framework is to provide the right people with the right access at the right time.

Authentication
This area comprises authentication management and session management.
Authentication is the module through which a user provides sufficient credentials to gain
initial access to an application system or a particular resource. Once a user is
authenticated, a session is created and referred to during the interaction between the user
and the application system until the user logs off or the session is terminated by other
means (e.g., timeout). By centrally maintaining the user's session, the authentication module provides a Single Sign-On (SSO) service, so the user need not log on again when accessing another application.
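
To make centrally maintained sessions concrete, here is a minimal sketch of an in-memory session store such as an SSO service might consult before re-prompting a user for credentials. All names are hypothetical rather than taken from any IAM product, and a real deployment would use a shared, replicated session store.

    import secrets
    import time

    SESSION_TIMEOUT = 15 * 60  # seconds of inactivity before a session expires

    sessions = {}  # token -> session record (illustrative in-memory store)

    def authenticate(user_id, password, verify_credentials):
        """Create a session once the supplied credentials check out."""
        if not verify_credentials(user_id, password):
            return None
        token = secrets.token_urlsafe(32)
        sessions[token] = {"user": user_id, "last_seen": time.time()}
        return token  # handed back to the client, e.g. as a cookie

    def validate_session(token):
        """Called by every application behind the SSO service."""
        record = sessions.get(token)
        if record is None:
            return None
        if time.time() - record["last_seen"] > SESSION_TIMEOUT:
            del sessions[token]  # expire idle sessions (timeout)
            return None
        record["last_seen"] = time.time()  # sliding expiration
        return record["user"]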

Authorization
Authorization is the module that determines whether a user is permitted to access a
particular resource. Authorization is performed by checking the resource access request,
typically in the form of a URL in a web-based application, against authorization policies
that are stored in an IAM policy store. Authorization is the core module that implements
role-based access control.
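
A hedged sketch of such a policy check follows; the policy store here is a plain dictionary with invented entries, standing in for a real IAM policy store.

    # Policy store: URL prefix -> roles allowed to access it (invented data)
    POLICIES = {
        "/accounting/": {"accountant", "cfo"},
        "/hr/": {"hr_staff"},
        "/public/": {"any"},
    }

    def is_authorized(user_roles, requested_url):
        """Check a resource request against the policy store (longest prefix wins)."""
        matches = [p for p in POLICIES if requested_url.startswith(p)]
        if not matches:
            return False  # default deny: no policy covers this resource
        allowed = POLICIES[max(matches, key=len)]
        return "any" in allowed or bool(set(user_roles) & allowed)

    assert is_authorized({"accountant"}, "/accounting/reports")
    assert not is_authorized({"hr_staff"}, "/accounting/reports")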

User Management
This area comprises user management, password management, role/group
management and user/group provisioning. The user management module defines the set of
administrative functions such as identity creation, propagation, and maintenance of user
identities and privileges. One of its components is user life cycle management, which enables
an enterprise to manage the lifespan of a user account, from the initial stage of
provisioning to the final stage of de-provisioning.
Self-service is another key concept within user management. Through a self-profile
management service, an enterprise benefits from the timely and accurate maintenance
of identity data. Another popular self-service function is self-service password reset, which
significantly reduces the help desk workload for handling password reset requests.
User management requires an integrated workflow capability to approve certain user
actions, such as user account provisioning and de-provisioning.
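
The life cycle and its approval workflow can be sketched as a small state machine; the states, names and roles below are invented for illustration.

    from enum import Enum

    class AccountState(Enum):
        REQUESTED = "requested"
        ACTIVE = "active"
        DEPROVISIONED = "deprovisioned"

    class UserAccount:
        def __init__(self, user_id, roles):
            self.user_id = user_id
            self.roles = set(roles)
            self.state = AccountState.REQUESTED

        def approve(self, approver):
            # Provisioning requires an explicit approval step (workflow)
            if self.state is not AccountState.REQUESTED:
                raise ValueError(f"cannot approve account in state {self.state}")
            self.state = AccountState.ACTIVE
            print(f"{self.user_id} provisioned, approved by {approver}")

        def deprovision(self):
            # Final stage of the life cycle: all access is revoked
            self.roles.clear()
            self.state = AccountState.DEPROVISIONED

    account = UserAccount("jdoe", {"accountant"})
    account.approve(approver="manager1")
    account.deprovision()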

Central User Repository
The Central User Repository stores and delivers identity information to other services, and
provides a service to verify credentials submitted by clients. The Central User
Repository presents an aggregate, logical view of the identities of an enterprise. Directory
services adopting the LDAPv3 standard have become the dominant technology for the Central
User Repository.
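
For example, a lookup against an LDAPv3 directory might look like the sketch below, which uses the open-source Python ldap3 library; the host, credentials, base DN and attribute names are placeholders.

    from ldap3 import Server, Connection, ALL

    # Placeholder connection details for an LDAPv3 directory
    server = Server("ldap://directory.example.com", get_info=ALL)
    conn = Connection(server, user="cn=admin,dc=example,dc=com",
                      password="secret", auto_bind=True)

    # Look up a user's entry and group memberships
    conn.search(search_base="ou=people,dc=example,dc=com",
                search_filter="(uid=jdoe)",
                attributes=["cn", "mail", "memberOf"])
    for entry in conn.entries:
        print(entry.cn, entry.mail, entry.memberOf)
    conn.unbind()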


IAM Life Cycle

Figure: IAM Life Cycle

The figure depicts the identity management lifecycle:
• User Provisioning: The identity management lifecycle begins with the provisioning of the user.
• User Management: Once the user is provisioned, the next phase of identity management is the ongoing
maintenance of the user's access rights, passwords, and accounts. Applying policy-based
management to the user's identity can assist in automating the management of access control. For
example, policies can be set up that define the resources, applications and functions that a user in the
accounting department should have access to.
• Policy Management: Policy-based management is the glue that pulls all of this together. It allows
automatic updating of access rights, based on membership in a particular group or department. It
also ensures that corporate policies are enforced consistently across the enterprise.
• Privacy: In response to privacy regulations, enterprises must secure the privacy of certain types of
information that are related to specific individuals.
• Account Closure: Deleting the account when the identity is no longer needed.

KPIs in WebSphere Business Modeler

Business measures are the modeling elements that extend a process model to
create a business measure model. These include situation events, triggers,
counters, stopwatches, metrics, and KPIs.
A business measure is a variable that describes the behavior of a particular
business action that an employee, a process, or a business unit performs.
Identifying and measuring the right variables are at the core of an effective
measurement system. Managers can use this data to lead their organizations
and make informed decisions.
The development of business measures for a process or processes reflects
management's decisions on the design of monitored dashboards (that reflect the
organization’s goals and objectives), as well as the allocation of resources and
the organization of the company. In addition, the design of the business
measures is affected by the design of the business processes. Then, the
management reaction to business measure (monitored) results will affect, in turn,
the redesign of the processes.
KPIs
A key performance indicator (KPI) is just that—an important indicator of how well
a process or an organization is performing. The most effective KPIs are based on
strategic goals. A strategic goal is an executive statement of direction in support
of a corporate strategy. The strategic goal is a high-level goal that is quantifiable,
measurable, and results-oriented. For business measures modeling, the
strategic goal is translated into a KPI that enables the organization to measure
some aspect of the process against a target that they define. KPIs are defined
within the context of the Business Measures Editor of Modeler and evaluated by WebSphere
Business Monitor, comparing the defined KPI targets against actual results to
determine levels of success.
Figure: WBM Business Measure Details
A KPI is associated with a specific process and is generally represented by a
numeric value, based on one or more metrics. A KPI has a target and
allowable margins (percentage of target), or lower and upper limits (absolute
values), forming a range of performance that the process should achieve. An
example of a simple KPI is the average time to respond to a customer inquiry, with
a target of less than two days.
KPIs, as well as metrics and counters, can optionally generate situation events
that can cause business actions. An administrator can use the Action Manager in
WebSphere Business Monitor to specify what happens when the situation event
is received, such as an e-mail notification to the appropriate person.
Best practice: When defining KPIs, be consistent in the use of targets and
margins or limits.
Good KPIs
Every company's success rests on its ability to provide better products,
services, or both, in the shortest time and at the minimum cost. Appropriate
business measurement makes process improvement not only possible but also
continuous. Employees tend to reduce the complexity of the measured activities, which
leads to decreasing costs while increasing productivity and flexibility.
Thus, it is important to use business measures effectively to drive performance
improvements. A measurement system only provides you with data. It has value
only if the data can be used to make good business decisions and to drive
improvement efforts that translate into appropriate actions and performance
plans.
The development of appropriate KPIs helps to focus on the runtime management
of the process and also guides the directions for improving (remodeling) the
processes, which is a key benefit of the Business Innovation and Optimization
lifecycle.
Best practice: The following are essential characteristics of effective KPIs:
  1. Represent the essential few: A successful set of measures contains the vital few key measures that are linked to your success. There may be hundreds, or even thousands, of measures in your organization's database, but no individual can focus on more than a few relevant measures.
  2. Combine multiple measures into several overall business measures: A number of organizations struggle with measuring performance by looking at a dozen or so measures. One way of reducing the number of measures is to assign a weight to each measure in a family of measures. You can develop an index, or an aggregate statistic, that represents performance by multiplying each measure by its assigned weight and then adding all such products to arrive at a weighted-average total (see the sketch after this list).
  3. Change your strategy as situations change: Sometimes a company starts collecting data on a specific measure because of a specific problem. Once the problem has been solved or the issues that caused the problem have disappeared, collecting, analyzing, and reporting the measure may be unnecessary.
  4. Quantify how well processes achieve their goals: A business measure is defined as a quantification of how well the activities within a process or the outputs of a process achieve a specified goal. Quantification is an important part of this definition. To measure something, its attributes must be quantified. Measurement requires the act of measuring and should therefore be reliable and repeatable.
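
As a sketch of the weighted-index idea from point 2, the measures, scores and weights below are invented for illustration:

    # Family of measures, each normalized to a 0-100 score, with assigned weights
    measures = {
        "on_time_delivery": (92.0, 0.40),
        "order_accuracy": (88.0, 0.35),
        "customer_rating": (75.0, 0.25),
    }

    # Weights should sum to 1 for a weighted average
    assert abs(sum(w for _, w in measures.values()) - 1.0) < 1e-9

    # Multiply each measure by its weight and sum the products
    index = sum(score * weight for score, weight in measures.values())
    print(f"Overall business index: {index:.2f}")  # 86.35
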
A KPI should be put into the context of what the process or organization is trying to
accomplish, which is identified by the goals and targets that have been defined.
Therefore, a good KPI should be designed to help determine whether or not a
goal or objective is being satisfied.
If you work with Modeler, you may know the ClipsAndTacks sample project, so we can take that sample scenario with the Handle Order process.
For example, management has decided that the Handle Order process has to be updated so that it can fill orders in a
shorter amount of time. Company management wants to establish an automated
process that shortens order turnaround time, especially for trusted repeat
customers. The planned improvements include a new Web-based ordering
system.
ClipsAndTacks' high-level business objectives are to attract more customers,
increase revenue, and reduce costs. Specifically, management wants to achieve
the following goals:
  1. Reduce the average time from when orders are received to the time they are shipped to three days. Based on this, a KPI was developed to track the average duration for processing an order, with a target of less than three days.
  2. Achieve an order approval rate of 90% or better. Based on this, a KPI was developed to track the percentage of approved orders, with a target of 90%.
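
The evaluation logic Monitor applies to these two KPIs can be sketched outside the product in a few lines; the per-order records below are invented sample data.

    from datetime import timedelta

    # Sample per-order records: (processing duration, approved?)
    orders = [
        (timedelta(days=2, hours=4), True),
        (timedelta(days=1, hours=20), True),
        (timedelta(days=4, hours=2), False),
        (timedelta(days=2, hours=10), True),
    ]

    # KPI 1: average order-processing duration, target < 3 days
    avg_duration = sum((d for d, _ in orders), timedelta()) / len(orders)
    print(f"Average duration: {avg_duration}, "
          f"target met: {avg_duration < timedelta(days=3)}")

    # KPI 2: percentage of approved orders, target >= 90%
    approval_rate = 100 * sum(1 for _, ok in orders if ok) / len(orders)
    print(f"Approval rate: {approval_rate:.0f}%, target met: {approval_rate >= 90}")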

Reference:
IBM Redbooks, Best Practices for Using WebSphere Business Modeler and Monitor

Tuesday, November 23, 2010

Interesting Business Analyst Interview Questions

I have collected some interesting BA job interview questions, mostly non-technical, which can occur in any IT-related job role, so read them carefully.

“What are the other companies you are interested in?”
“What do you do if another employee yells at you?”
“What was your proudest moment?”
“Why do you want to work for us?”
“Describe how you will deal with an unethical issue at work.”
“Describe a situation when you incorporated another person in making a key decision.”
“Explain your most notable achievement.”
“What do you know about this job?”
“How did you evaluate pros and cons when making a difficult decision?”
“What was the happiest time of your life?”
“Describe a time when you received negative feedback.”
“Explain how you don't like to fail; give me an example where you came close to failing but bounced back.”
“Describe a situation in which you took a risk, did it pay off, and would you do it differently next time.”
“How do other people view you?”
“Why would you want to work for a company after running your own company for several years?”
“Describe a time when you had disagreements with people.”
“Describe a situation where you had to overcome difficulties with a coworker.”
“How would you start if I asked you to design a TV?”
“Tell me about yourself.”
“What is the best example of a process you have made more efficient in your life? How did you do it?”
“Describe how you usually communicate with others.”
“Describe a work related problem you had to face recently. What did you do to deal with it”
“If you had to describe a professional short-coming, what would it be?”
“What is your weakness?”
“Have you had any negative interactions with a co-worker in the past and how did you handle it?”
“Tell me how your past work experience relates to your current job/or for this position?”
“What do you know about Us?”
“What was your least favorite class?”
“Name a situation in which you worked with a group.”
“What are your strengths? Weaknesses?”
“What salary are you seeking?”
“Describe a time you needed to make a quick decision.”
“If I spoke with some of your previous employees, how would they describe you?”
“Describe your biggest challenge in your work thus far.”
“Leadership skill, teamwork skills”
“Out of the many standards and certifications existing in this specific field, which one do you believe is the more appropriate and one you follow?”
“Name a time that you used logic to solve a problem.”
“The hardest project you undertook.”
“Tell us about a time when you had a conflict with your boss. How did you resolve that?”
“What was the hardest challenge you've had over your school and work life?”
“Where do you see yourself 5 years from now?”
“Who are the stakeholders? Can you think of any more? Is that all?”
“What Interests you about this role?”
“Tell me about yourself?”
“What are your development areas, both professionally and personally, as they relate to work?”
“What special attributes or skills make you more qualified for this position than another applicant?”
“What did you do when, after completing a significant amount of work, you found out things had changed and you had to completely redo your work?”
“What makes you look for another job?”
“How do you handle an unreasonable or extremely unhappy person?”
“What would you say your strengths are?”
“Of your projects, which one do you think could be of benefit to our company, and how?”

Document Management Concepts

Most BAs have worked with some type of document management system, or have basic knowledge of the subject. For those who haven't, this is a good place to see the basic concepts.
Document management has been named differently over time:
  1. DMS (Document Management Systems),
  2. DIS(Document Information Systems),
  3. EDM(Electronic Document Management),
  4. ECM(Enterprise Content Management),
  5. Content Management and Knowledge Management.
It's all about the same concept: a set of processes and technologies supporting the evolutionary life cycle of digital information. It allows you to create a document or capture a hard copy in electronic form; to store, edit, print, process and otherwise manage documents in image, video and audio form, as well as in text.
Today we know that;
  • 85% of information is unstructured, outside a database
  • 30% of people’s time is spent searching for information
  • 60%-80% of workers can’t find information they need
  • Over 80% of enterprise information is unstructured (checks, PDFs, contracts...)
So we must manage business documents, the results of most business processes. They can be made of multiple media.
If you have a process for creating, reviewing and approving documents, you need workflow (e.g., a contract approval workflow), as in the sketch below.
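
A minimal sketch of such an approval workflow as a state machine; the states and actions are invented for illustration.

    # Allowed transitions of a simple document-approval workflow
    TRANSITIONS = {
        "draft": {"submit": "in_review"},
        "in_review": {"approve": "approved", "reject": "draft"},
        "approved": {"publish": "published"},
        "published": {},
    }

    def advance(state, action):
        """Move a document through the workflow, rejecting illegal actions."""
        try:
            return TRANSITIONS[state][action]
        except KeyError:
            raise ValueError(f"action '{action}' not allowed in state '{state}'")

    state = "draft"
    for action in ("submit", "reject", "submit", "approve", "publish"):
        state = advance(state, action)
    print(state)  # published
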
An electronic document has the following characteristics (Sprague):
  1. holds information of multiple media: text, graphics, audio, video
  2. contains multiple structures: headers, footers, TOC, sections, paragraphs, tables
  3. is dynamic: can be updated on the fly
  4. may depend on other documents
There is a difference between a Document Management System and an Electronic Records Management System (ERMS). An ERMS is a system used by an organization to manage its records from creation to final disposition. The system’s primary management functions are categorizing and locating records and identifying records that are due for disposition. The Electronic Records Management System also stores, retrieves, and disposes of the electronic records that are stored in its repository.
The Electronic Records Management System may contain a content management and document management component to its system.
There are seven basic components of DMS:
  1. Capture of documents for bringing them into the system,  
  2. Storing and archiving methods (set retention periods for documents, and schedule archival or removal processes.)
  3. Indexing and retrieving tools for document search (find documents and files in seconds)
  4. Distribution for exporting documents from the system
  5. Security to protect documents from unauthorized access
  6. Audit trails (Verify who viewed and made updates to documents.)
  7. Version control gives you the ability to manage document changes and revisions, including going back to a previous version of a document (see the sketch below).
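
As a sketch of the version-control component from point 7, here is a toy revision store that keeps every version and can roll back; all names are illustrative.

    class VersionedDocument:
        """Keeps every revision so earlier versions can be restored."""

        def __init__(self, content):
            self.revisions = [content]  # revision 1 is at index 0

        def update(self, new_content):
            self.revisions.append(new_content)

        @property
        def current(self):
            return self.revisions[-1]

        def rollback(self, revision_number):
            # Restoring an old version is recorded as a new revision (audit trail)
            self.update(self.revisions[revision_number - 1])

    doc = VersionedDocument("Contract v1 draft")
    doc.update("Contract with revised payment terms")
    doc.rollback(1)
    print(doc.current)  # Contract v1 draft
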
Today, DMS has grown into ECM (Enterprise Content Management). Wikipedia says: ECM is an umbrella term covering document management, web content management, search, collaboration, records management, digital asset management (DAM), workflow management, capture and scanning.
ECM is primarily aimed at managing the life-cycle of information from initial publication or creation all the way through archival and eventually disposal. The benefits to an organization include improved efficiency, better control, and reduced costs. For example, many banks have converted to storing copies of old checks within ECM systems versus the older method of keeping physical checks in massive paper warehouses. Under the old system a customer request for a copy of a check might take weeks, as the bank employees had to contact the warehouse to have someone locate the right box, file and check, pull the check, make a copy and then mail it to the bank who would eventually mail it to the customer. With an ECM system in place, the bank employee simply searches the system for the customer’s account number and the number of the requested check. When the image of the check appears on screen, they are able to immediately mail it to the customer—usually while the customer is still on the phone.

Figure: Enterprise Content Management Overview
Source: IBM


References:
Wikipedia
IBM

Sunday, November 21, 2010

EPC Diagrams Overview

The Event-driven Process Chain (EPC) modeling technique is available in the IDS Scheer ARIS Toolset. Process chains describe the sequencing of and interaction between data, process steps, IT systems, organizational structure and products. An EPC always starts and ends with events, which define the state or condition under which a process starts and the state under which it ends.
A function/activity is a technical task, a procedure, and/or an activity performed on an object to support one or more company goals (e.g., Manufacturing).
Figure 1: ARIS  symbols
Events act as triggers for activities, but they also result from preceding functions and therefore describe outcomes. EPC diagrams follow an event-function (activity)-event structure; they must begin and end with events.

Figure 2: event-activity-event
Logical branches in the chronological flow of the process are represented by rules in the form of logical operators (AND, OR, XOR). Branching is done with three types of connectors:
  1. AND
  2. OR
  3. XOR (exclusive OR)
There are some rules about using each type of connector; you can find them in the ARIS documentation. When you need to enrich the model with additional information, you can assign an organizational unit/role/person/location to an activity to illustrate who performs it. A process generates data, or requires data to be able to continue, so you can use the Database symbol for data flow in the process. Some activities are performed in external IT systems (e.g., CRM), so drag and drop the IT system symbol and associate it with such an activity. If there are activities with critical effects on the process, assign the Risk symbol to them so you can define countermeasures. A machine-readable sketch of a small EPC fragment follows Figure 3.

Figure 3: additional symbols in EPC
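
To make the event-function-event structure and connectors concrete, here is a hedged sketch of a small EPC fragment with an XOR split, represented as plain data; the node names are invented.

    # Nodes of a small EPC fragment: events, a function, and one XOR connector
    nodes = {
        "order_received": "event",
        "check_order": "function",
        "xor_1": "XOR",
        "order_accepted": "event",
        "order_rejected": "event",
    }

    # Directed edges: event -> function -> connector -> events
    edges = [
        ("order_received", "check_order"),
        ("check_order", "xor_1"),
        ("xor_1", "order_accepted"),  # exactly one of the XOR branches
        ("xor_1", "order_rejected"),  # fires at run time
    ]

    # The XOR split has two outgoing branches, both leading to events
    outgoing = [dst for src, dst in edges if src == "xor_1"]
    assert all(nodes[n] == "event" for n in outgoing)
    # EPC well-formedness spot check: the chain starts (and ends) with events
    assert nodes["order_received"] == "event"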

EPCs are typically used at the lower levels of the process hierarchy. If technical and business processes need to be described, other methods, such as BPMN or UML, are used instead of EPCs.


Saturday, November 20, 2010

Serena Prototype Composer 2009 R1 Review

Prototype Composer is a requirements visualisation and prototyping tool designed to simulate how applications will look and function before a developer writes any code, Serena said. With the Prototype Composer product, Serena is attempting to solve the problem of business users not always describing everything they want in an application, or describing it cryptically. Application prototyping makes it possible for project stakeholders to see how an application will look and
function before any code is ever written. With Serena Prototype Composer, business users and analysts can
visually capture requirements as wireframes and let end users interact with the prototype as if it were the final application, confirming requirements and suggesting changes in real time.
Best of all, it's completely free. Prototype Composer is designed for:
  • Business Analysts
  • Business Consultants
  • Project Managers
  • Product Managers
  • User Experience Developers
  • Product architects
The Prototype Composer user interface consists of four main regions:
  1. the editor window that you use to view and edit models of processes, activities, interfaces, actions, decisions and activity data,
  2. the navigation pane to select model items to edit,
  3. menus and toolbars to provide access to modeling commands and options
  4. task panes to provide tools and palettes to support your modeling activities.
Figure 1: user interface regions

Prototype Composer breaks the job of modeling application projects into a number of different tasks, each of which requires a different editor. So, Prototype Composer contains seven high level editors that are accessible through the navigation pane for editing the various aspects of a project:
Project - to collect general project information, to manage requirements, and to create and publish documentation related to the project.
Process - to graphically define the process flows that make up the project. A process defines the relationship between business activities performed by different audiences to accomplish a business goal.
Activity - to graphically define the interactive, system or manual activity flows that comprise the business processes. An activity consists of a flow diagram that combines interface, decision, and action steps to describe the logic and functional behavior of an individual interaction or system task. After you have defined an activity, you can use Prototype Composer to simulate running the activity flow.
Interface - to graphically define user interface pages. An interface consists of a page containing a variety of visual elements such as images, buttons, text editors, and other controls.
Decision - to create steps that embody business rules. The process and activity editors use decision steps to make branching decisions that determine which steps are executed next in a flow.
Action - to define steps that perform calculate, communicate, and connect actions. For example, action steps in an activity may represent sending email, performing calculations, or connecting to systems-of-record.
Data - to view and edit the inputs, outputs and data underlying a selected activity.
The simulation feature lets you view each interface step, enter data and make decisions as you traverse the activity. Links in the underlying activity map are highlighted to show your progress. You can step forward and backward through the activity, set breakpoints, and watch decisions being made.
Simulation is not just a slide show of interface steps. Information you enter in interface steps is stored in the underlying activity data, used by decisions and displayed in downstream steps that map to those values. You can specify the data written by action steps, either individually by selecting a test set while simulating, or throughout the activity flow by specifying a scenario.
Serena comes with an excellent sample project called Qlarius Insurance Quotes, and if you take a few hours to study it, the whole concept of Serena becomes easy to understand. In the next posts, I'm going to show how to create a project in Prototype Composer.


Figure 2: running simulation

    Friday, November 19, 2010

    Importing from Microsoft Visio to IBM WebSphere Business Modeler 6.0.2


    Sometimes we draw business processes in Microsoft Visio (or someone else does), and later we want to make them available in WebSphere Business Modeler. Or, we already have a business process in Microsoft Visio and want to create simulations in WebSphere Business Modeler. So, we need to import the Visio files into WebSphere Business Modeler using its Import feature. However, there are some limitations: WebSphere Business Modeler is a modeling tool for creating valid processes, while you can draw anything in Visio without any constraints. And believe me, if you want to create simulations, you will spend some time fixing errors in WebSphere Business Modeler.
    Let's start the import by creating a new project in WebSphere Business Modeler into which you will import your Visio file.
    There is a sample Visio file to import in your installation directory, so navigate to samples\import\Visio Sample1 in step 4 to try the mappings from Visio to WebSphere Business Modeler.

    If you are importing a Visio process that contains multiple pages which are connected by off-page reference elements, you only need to select one page of the process. The result is that all pages will be imported. If you select multiple connected pages for importing, WebSphere Business Modeler will create duplicate processes, one for each page that you select.
    The following restrictions apply to importing Visio shapes:
    1. Connections in Visio that are not attached to connection points on Visio shapes are not mapped to WebSphere Business Modeler. Ensure that you use the "Glue to Connection Point" option in Visio.
    2. The Visio import always creates a single process without subprocesses.
    3. In WebSphere Business Modeler, each node must have a unique name. If one or more shapes have the same name in Visio, the names will be differentiated by the addition of a number.
    To import shapes from Visio files, complete the following steps:
    1. In Visio, select File > Save As > XML Drawing (*.vdx). (If you have many drawings to convert, see the batch-conversion sketch after these steps.)
    2. In the Project Tree view of WebSphere Business Modeler, right-click your project and select Import. The Import wizard appears.
    3. Select Microsoft Visio (.vdx) and click Next.
    4. Click Browse to select the source directory that contains the VDX files you want to import.
    5. In the Files list, select the file.
    6. In the Target project field, select an existing project from the drop-down list or click New to create a new project.
    7. Click Next. Select the Visio pages that you want to import, or click Select All to add all pages. Click Add.
    8. When you have finished specifying pages, click Next and specify the mappings as follows:
      1. Any Visio shapes that you are importing that are not yet mapped are shown in the upper list. You can select each of them in turn (or select several by holding down the Ctrl key) and select the WebSphere Business Modeler element to which to map them. Click Add to add each mapping. 
      2. The current mappings are shown in the lower list. If you wish, you can click Save As to store these mappings in an XML file. 
      3. If you have previously saved mappings or created your own XML file, you can load the file by clicking Load. When you load a mapping XML file, the current mappings remain and the new ones are added. If there is a conflict, the new mappings replace the older ones.
      Any shape that you do not map is mapped to a task. You can click Clear to clear the list of current mappings, or Default to restore the mappings that WebSphere Business Modeler provides by default.
    9. If you have functional band groups in the Visio file, click Next to get to the swimlane selection screen. Any functional band groups that you are importing are shown in the upper list. You can select each of them in turn (or select several by holding down the Ctrl key) and select the swimlane type to which they correspond in WebSphere Business Modeler. Click Add to add each mapping. If you do not select a swimlane type, functional bands are mapped to organization units.
    10. When you have finished specifying your import options, click Finish. A window opens when the import process is complete.
    11. If there were any errors or warnings during the import process, click Details to read them. Otherwise, click OK.
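
    Step 1 can be automated when you have many drawings to convert. The following is a hedged sketch using the Python pywin32 COM bindings; it assumes Visio is installed on the machine, and the folder path is a placeholder.

        import glob
        import win32com.client  # pywin32; drives a locally installed Visio via COM

        visio = win32com.client.Dispatch("Visio.Application")
        visio.Visible = False

        for path in glob.glob(r"C:\drawings\*.vsd"):  # placeholder source folder
            doc = visio.Documents.Open(path)
            doc.SaveAs(path[:-4] + ".vdx")  # save as XML Drawing (*.vdx)
            doc.Close()

        visio.Quit()
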
    The default mapping from Visio to WebSphere Business Modeler is logical, but remember:
    1. the Visio Page shape becomes a WBM Process,
    2. the Visio Process, Activity, Flowchart, Procedure and Node shapes become WBM Tasks,
    3. Visio Disk Storage, Database and Direct Data become a WBM Local repository,
    4. Visio Document, Data, Internal Storage, Stored Data and Message from/to user become WBM Business Items.

    Source: IBM Websphere Business Modeler Advanced Version 6.0.2 Help

    Introduction to Cloud computing

    When I first read about cloud computing, my first thought was that it is the same as Software-as-a-Service (SaaS). Well, there is a difference: cloud computing delivers computing as a utility, while SaaS delivers an application (such as HRM) as a utility. Cloud computing is a natural evolution of the widespread adoption of virtualization, service-oriented architecture and utility computing.
    Cloud computing is cost-effective: both the initial and the recurring expenses are much lower than in traditional computing. Maintenance cost is reduced because a third party maintains everything, from running the cloud to storing data. The service is fully managed by the provider, and users can consume services at a rate set by their particular needs. This on-demand service can be provided at any time.
    A good service provider is the key to good service, so it is imperative to select the right one. The provider must be reliable, well reputed for customer service, and should have a proven track record in IT-related ventures.
    But there are significant security concerns that need to be addressed when considering moving critical applications and sensitive data to public and shared cloud environments.

    What is cloud computing?
    Some will suggest that cloud computing is simply another name for the Software as a Service (SaaS) model. Others say that cloud computing is marketing hype that puts a new face on old technology, such as utility computing, virtualization, or grid computing.
    For the purpose of this article, consider that cloud computing is an all-inclusive solution in which all IT resources (hardware, software, networking, storage, and so on) are provided rapidly to users as demand dictates. The resources, or services, that are delivered are governable to ensure things like high availability, security, and quality.
    In short, cloud computing solutions enable IT to be delivered as a service.


    Why cloud computing?
    First of all, cloud computing can cut costs associated with delivering IT services. You can reduce costs by obtaining resources only when you need them and paying only for what you use. Finally, cloud computing models provide for business agility. Since the entire IT infrastructure can scale up or down to meet demand, businesses can more easily meet the needs of rapidly changing markets to ensure they are always on the leading edge for their consumers.
    In many ways, cloud computing is the realization of combining many existing technologies (SOA, virtualization, autonomic computing) with new ideas to create a complete IT solution.
    Anatomy of a cloud
    With what is hopefully an acceptable definition of cloud computing behind us, let's take a look at the layers of the cloud. Figure 1 is a distillation of what most agree are the three principal components of a cloud model. This figure accurately reflects the proportions of IT mass as it relates to cost, physical space requirements, maintenance, administration, management oversight, and obsolescence. Further, these layers not only represent a cloud anatomy, but they represent IT anatomy in general.

    Figure 1: anatomy of a cloud

    The layers that make up a cloud include:
    Application services
    This layer is perhaps the most familiar to everyday Web users. The application services layer hosts applications that fit the SaaS model. These are applications that run in a cloud and are provided on demand as services to users. Sometimes the services are free and providers generate revenue from things like Web ads; other times, application providers generate revenue directly from the usage of the service. For example, if you have checked your mail using Gmail or Yahoo Mail, or kept up with appointments using Google Calendar, then you are familiar with the top layer of the cloud.
    Perhaps not quite as apparent to the public at large is that there are many applications in the application services layer that are directed to the enterprise community. There are hosted software offerings available that handle payroll processing, human resource management, collaboration, customer relationship management, business partner relationship management, and more. Popular examples of these offerings include Unyte, Salesforce.com, Sugar CRM, and WebEx.
    In both cases, applications delivered via the SaaS model benefit consumers by relieving them from installing and maintaining the software, and they can be used through licensing models that support pay for use concepts.
    Platform services
    This is the layer in which we see application infrastructure emerge as a set of services. This includes but is not limited to middleware as a service, messaging as a service, integration as a service, information as a service, connectivity as a service, and so on. The services here are intended to support applications. These applications might be running in the cloud, and they might be running in a more traditional enterprise data center. In order to achieve the scalability required within a cloud, the different services offered here are often virtualized. Examples of offerings in this part of the cloud include Amazon Web Services, Cast Iron, and Google App Engine. Platform services enable consumers to be sure that their applications are equipped to meet the needs of users by providing application infrastructure based on demand.
    Infrastructure services
    The bottom layer of the cloud is the infrastructure services layer. Here, we see a set of physical assets such as servers, network devices, and storage disks offered as provisioned services to consumers. The services here support application infrastructure. As with platform services, virtualization is an often used method to provide the on-demand rationing of the resources. Examples of infrastructure services include VMWare, Amazon EC2, Microsoft Azure Platform.
    Security?
    Here are seven of the specific security issues Gartner says customers should raise with vendors before selecting a cloud vendor.
    1. Privileged user access.
    Sensitive data processed outside the enterprise brings with it an inherent level of risk, because outsourced services bypass the "physical, logical and personnel controls" IT shops exert over in-house programs.
    2. Regulatory compliance.
    Customers are ultimately responsible for the security and integrity of their own data, even when it is held by a service provider. Traditional service providers are subjected to external audits and security certifications.
    3. Data location.
    When you use the cloud, you probably won't know exactly where your data is hosted. In fact, you might not even know what country it will be stored in.
    4. Data segregation.
    Data in the cloud is typically in a shared environment alongside data from other customers. Encryption is effective but isn't a cure-all. "Find out what is done to segregate data at rest," Gartner advises.
    5. Recovery.
    Even if you don't know where your data is, a cloud provider should tell you what will happen to your data and service in case of a disaster. Ask your provider if it has "the ability to do a complete restoration, and how long it will take."
    6. Investigative support.
    Investigating inappropriate or illegal activity may be impossible in cloud computing, Gartner warns. "Cloud services are especially difficult to investigate, because logging and data for multiple customers may be co-located and may also be spread across an ever-changing set of hosts and data centers. If you cannot get a contractual commitment to support specific forms of investigation, along with evidence that the vendor has already successfully supported such activities, then your only safe assumption is that investigation and discovery requests will be impossible."
    7. Long-term viability.
    Ideally, your cloud computing provider will never go broke or get acquired and swallowed up by a larger company. Hm, this is a weird statement in today's environment.

    One of the core aspects to keeping the cloud safe for all users is the adherence to the basic security principles that apply in the non-virtualised world.
    It is imperative that IT staff do the basics:
    1. At minimum, authenticate users with a username and password, along with stronger authentication options depending on the risk level of the services being offered.
    2. Enterprise administration capabilities are required, especially the administration of privileged users for all supported authentication methods.
    3. Self-service password reset functions should be used first to validate identities.
    4. Define and enforce strong password policies.
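
    As a sketch of point 4, a strong-password policy check might look like the following; the specific rules are illustrative, not a recommendation.

        import re

        def check_password_policy(password):
            """Return a list of policy violations; an empty list means it passes."""
            problems = []
            if len(password) < 12:
                problems.append("must be at least 12 characters")
            if not re.search(r"[A-Z]", password):
                problems.append("must contain an uppercase letter")
            if not re.search(r"[a-z]", password):
                problems.append("must contain a lowercase letter")
            if not re.search(r"\d", password):
                problems.append("must contain a digit")
            if not re.search(r"[^A-Za-z0-9]", password):
                problems.append("must contain a special character")
            return problems

        print(check_password_policy("Tr0ub4dor&3x!"))  # [] -> passes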

    References:
    IBM developerworks
    Wikipedia
    

    Thursday, November 18, 2010

    Best practices for portal projects

    Here I want to give some notes about best practices when starting a portal project. Of course, every software development project is unique, but here are my thoughts about it.

    Team Composition
    1. Have a business sponsor!
    2. Select the best available project manager
    3. Involve various disciplines, but keep the core team small: portal architect, developers
    Business Value (must be clearly understood before the portal project is started)
    1. Quantitative measures: page hits, portlet hits
    2. Qualitative measures: surveys
    Gather Requirements
    1. Define target audience
    2. Interviewing the right people
    3. Conduct workshops
    4. Document requirements
    5. Clearly define project scope, objectives and goals!
    Planning
    1. Don't overload your project planning with details – it will change
    2. Base the plan on the outcome of the requirements workshop
    3. Involve experienced project staff
    4. Define tasks/roles
    5. Estimates (always ask for an optimistic and a pessimistic estimate)
    6. Decompose in manageable pieces
    7. Test functionality and performance at defined milestones
    8. Ensure alignment with expectations
    9. Avoid unnecessary complexity
    10. Validate the Plan

    Design Concepts
    1. Remember- the Simpler the Better
    2. The information being communicated should be the primary focus
    3. Good Looks are Simple
    4. Determine your target audience's screen size
    5. Limit Browser support
    6. Limit use of graphics
    7. Minimize “Wasted Space”
    8. Keep the design consistent across all pages
    9. Keep the layout simple
    10. Avoid placing portlets in rows, use columns only
    11. Restrict the maximum number of portlets on a page (max. 5)
    12. Keep lightweight portlets on pages everybody accesses
    13. Heavier portlets go on pages users select
    14. Carefully manage amount of content and complexity
    15. Involve designers who understand the functional capabilities of the portal to leverage those capabilities instead of working around them
    16. Include bread crumbs!
    17. Avoid more than 3 or 4 navigational levels

    What is Service Integration Maturity Model?

    Well, SIMM stands for Service Integration Maturity Model, and it is a standardized model that guides organizations on their SOA transformation journey. By having a standard maturity model, it becomes possible for organizations or the industry to benchmark their SOA levels, to have a transformation roadmap to assist their planning, and for vendors to offer services and software against these benchmarks. SIMM may also serve as a framework for the transformation process that can be customized to suit the specific needs of organizations and assessments. This process is a simple sequence of steps: configure the assessment framework, determine the initial level of maturity, and determine the target level of maturity and a transformation path from the initial to the target level.
    The Service Integration Maturity Model (SIMM) helps an organization create a roadmap for its incremental transformation towards more mature levels of service integration, in order to achieve the increasing business benefits associated with higher levels of maturity. SIMM is used to determine which organisational characteristics are desirable in order to attain a new level of maturity. This will determine whether problems occurring at the current level can be solved by evolving to a higher level of service integration maturity.

    There are seven levels of maturity in SIMM;
    1. Silo (data integration)
    2. Integrated (application integration)
    3. Componentized (functional integration)
    4. Simple services (process integration)
    5. Composite services (supply-chain integration)
    6. Virtualized services (virtual infrastructure)
    7. Dynamically reconfigurable services (eco-system integration)
    Level One: The organization starts from proprietary and quite ad-hoc integration, rendering the architecture brittle in the face of change.
    Level Two: The organization moves toward some form of EAI (Enterprise Application Integration), albeit with proprietary connections and integration points. The approaches it uses are tailored to use legacy systems and attempt to dissect and re-factor through data integration.
    Level Three: At this level, the organization componentizes and modularizes major or critical parts of its application portfolio. It uses legacy transformation and renovation methods to re-factor legacy J2EE or .NET-based systems with clear component boundaries and scope, exposing functionality in a more modular fashion. The integration between components is through their interfaces and the contracts between them.
    Level Four: The organization embarks on the early phases of SOA by defining and exposing services for consumption internally or externally for business partners -- not quite on a large scale -- but it acts as a service provider, nonetheless.
    Level Five: Now the organization extends its influence into the value chain and into the service eco-system. Services form a contract among suppliers, consumers, and brokers who can build their own eco-system for on-demand interaction.
    Level Six: The organization now creates a virtualized infrastructure to run applications. It achieves this level after decoupling the application, its services, components, and flows. Now the infrastructure is more finely tuned, and the notions of the grid and the grid service render it more agile. It externalizes its monitoring, management, and events (common event infrastructure).
    Level Seven: The organization now has a dynamically re-configurable software architecture. It can compose services at run-time using externalized policy descriptions, management, and monitoring.

    source: IBM DeveloperWorks

    Portals- Part 2

    In this part, I'm going to continue with the conceptual flow of a typical portlet. There is a specific series of steps that are required as part of the page aggregation process (steps 4 and 5 of the conceptual flow).

    Figure: Page Aggregation Process

    Portals- PART 1

    I found some interesting materials about web portals, so I will cover some basic concepts important for BAs.
    It's important to realize that SOA requires a holistic strategy and approach.
    It can start from the back end, through integration and business process modeling and optimization. But a holistic approach extends to, or can start from, the front end.
    This is where people experience an SOA in a very practical way.
    The front end represents a key integration point, so it's imperative to have a vendor that provides a comprehensive end-to-end, back-to-front-end, front-to-back-end solution.
    A portal is a composite application – one assembled at the front end. It is an aggregation point for services – delivered through portlets.
    The other side of this is that implementing an SOA is not a single product or a single initiative. It is a multi-step initiative, one requiring a number of phases and success at each step of the way. That's why it's imperative to start with a project that has a high likelihood of success and visibility. That's where a portal comes in – it's visible, experiential, a clear example of the benefits of SOA, providing a high likelihood of success with potentially high ROI.



    Portal Principle
    1. Combines portlets (application user interfaces and/or content) together into one unified presentation
    2. Delivers a highly personalized experience, considering role, personal settings, and device settings
    3. Separates site design, site/page assembly/administration, from application design
    4. Provides application integration, collaboration, single sign-on services

    What is a portlet? The term portlet refers to a small reusable program that can be placed on a portal page to perform a specific function, such as retrieving and displaying information. Portlets are often thought of as small windows or content areas on a Web page. Portlets provide access to applications, Web-based content, and other resources. You can create your own portlets or select ones created by others. Portlets can be web applications, independently developed, placed and deployed.
    A key point here is that any particular portlet is developed, deployed, managed, and displayed independently of other portlets. Administrators and end users create customized portal pages by choosing and arranging portlets. These portlets are accessible by any authorized user coming into the portal environment.
    Conceptual flow of a typical portlet developed with IBM WebSphere Portal technology is shown below:

    Figure: Conceptual flow of a typical portlet

    In the next part I will talk about the IBM-specific page aggregation process and some other important things.

    Wednesday, November 17, 2010

    Project milestones

    After you define the objectives, budget, and schedule for each phase and
    prioritize risks, then you can identify project milestones and describe exit
    criteria for each one. A milestone marks an important point in time at
    which you can assess a particular artifact, synchronize activities, or deliver
    a product. Milestones that mark the end of a phase (referred to as major
    milestones, e.g., customer acceptance of the product) also serve as important decision-making moments. At these
    milestones, stakeholders assess the results of the preceding phase and
    give formal approval to proceed with the next phase of the project. In
    order to set clear expectations and ensure that each phase is assessed
    objectively, you must specify exit criteria in advance for each milestone.
    If you have done a thorough risk analysis, defining milestone exit criteria
    for the Inception and Elaboration phases will be easy. If you find yourself
    struggling to refine the default criteria described in RUP, you should revisit
    the Prioritize Risks activity.
    Exit criteria for the Construction and Transition phases will be more
    difficult to define at this stage. Usually, the default criteria are a good
    place to start; you will refine these criteria sometime during Inception or
    Elaboration. Formulate milestone exit criteria as closed yes/no questions
    because you will use them to determine whether the project may proceed
    to the next phase.
    For example, suppose we have the milestone "Beta release"; exit criteria could be: Is the product sufficiently complete and of sufficient quality to start production acceptance testing? Have the end-user and support
    organization been prepared for deployment? Are budget/schedule variations acceptable to the
    stakeholders?
    Be sure not to include the state of a particular artifact or activity in these
    criteria. Simply producing an artifact or performing an activity is not in
    itself valuable to the project. Rather, it is what you achieve by producing
    the artifact or performing the activity that is valuable.
    Source: The Rational Edge, August 2003
    An example of poor milestone exit criteria: Have the test cases been executed?
    Better criteria: Do stakeholders find the quality of the product acceptable?

    RUP- building an iteration plan

    An Iteration Plan is a fine-grained, time-boxed plan; there is one per
    iteration. As each Iteration Plan
    focuses on only one iteration, it has a
    time span small enough to let the
    planners do a good job of detailing
    tasks with the right level of granularity and allocating them to appropriate
    team members.
    A project usually has two Iteration Plans active at any time:
    - a plan for the current iteration, to track progress for the iteration
    that is underway.
    - a plan for the next iteration, which is built toward the second half of
    the current iteration and is ready at the end of it.

    The development of an Iteration Plan has four steps:

    1. Determine the iteration scope (i.e., what you want to accomplish in the iteration)
    2. Define iteration evaluation criteria (i.e.,specify which artifacts will be worked on)
    3. Define iteration activities (what work needs to be done, and on which artifacts)
    4. Assign responsibilities (allocate resources to execute the
    activities).

    Inception and Elaboration
    At the beginning of a project, especially a green-field project, you will not
    yet have identified design elements specific to the new system on which to
    base your iteration planning. So instead, use a top-down approach,
    with rough estimates derived from other projects as a basis for your
    planning assumptions; in fact, you probably did just this for the Project
    Plan.
    If, however, you are dealing with an evolution cycle for an existing
    software product, Inception and
    Elaboration are likely to be shorter and you may have fewer risks to mitigate.
    The objectives of the iteration will be determined primarily by risks, which
    will, in turn, determine which use cases, scenarios, algorithms, and so on,
    will be developed during the iteration. The risks will also determine what
    means to use in order to assess whether the risks have been mitigated.

    Construction and Transition
    By the time you have an overall architecture in place, some
    measurements from previous iterations relating to artifacts (lines of code,
    defects, etc.) and process (time to complete certain tasks), hopefully, you
    will also have mitigated most of your risks.
    Now, you can proceed with a bottom-up, artifact-based planning
    approach, using design elements such as class, component, subsystem,
    use case, test case, and so on, as well as any measures from previous
    iterations to estimate the effort.
    The iteration objectives will be determined primarily by completion
    objectives, and achievement of a set level of quality. This also includes
    specific defects to correct, primarily those that prevent the use of major
    functionality or make the system crash; it also includes deferring "nice to
    haves" for future releases.
    Identifying Activities
    There are some activities that need to be run only once per iteration (or
    per phase or even per cycle). Both Plan an Iteration and Lifecycle
    Milestone Review fall into this category.
    But other activities must be instantiated (replicated) for each element, and
    the element is usually the activity's major output. For example: Code a
    Class must be done for each class; Integrate Subsystem must be done for
    each subsystem and Describe Use Case must be done for each use case.
    Consequently, in the very early iterations of a new project, because the
    design elements have not been identified, you will only be able to assign a
    "ballpark" figure to each activity. For example:
    Code (all) classes: 5 person-days
    Or, for a higher-level artifact:
    Develop proof-of-concept prototype: 20 person-days
    In later iterations, when the design elements are identified, activities can
    be associated to these elements with finer estimates. For example:
    Code Car class: 3 person-days
    Describe Customer use case: 2 person-days

    Core process workflows in RUP


    There are nine core process workflows in the Rational Unified Process, which represent a partitioning of all
    workers and activities into logical groupings.
    A workflow is a sequence of activities that produces a result of observable value.
    In UML terms, a workflow can be expressed as a sequence diagram, a collaboration diagram, or an activity diagram.
    The core process workflows are divided into six core “engineering” workflows:

    1. Business modeling workflow
    2. Requirements workflow
    3. Analysis & Design workflow
    4. Implementation workflow
    5. Test workflow
    6. Deployment workflow

    And three core “supporting” workflows are:

    1. Project Management workflow
    2. Configuration and Change Management workflow
    3. Environment workflow


    The actual complete workflow of a project interleaves these nine
    core workflows, and repeats them with various emphasis and intensity at each iteration.

    I'm going to say a few words about first three core workflows.
    Business Modeling
    One of the major problems with most business engineering efforts is that the software engineering and the business
    engineering communities do not communicate properly with each other. This leads to the output of business
    engineering not being used properly as input to the software development effort, and vice versa. The Rational
    Unified Process addresses this by providing a common language and process for both communities, as well as
    showing how to create and maintain direct traceability between business and software models.
    In Business Modeling we document business processes using so-called business use cases. This ensures a common
    understanding among all stakeholders of what business process needs to be supported in the organization. This is
    documented in a business object model.
    Requirements
The goal of the Requirements workflow is to describe what the system should do and to allow the developers and the customer to agree on that description. To achieve this, we elicit, organize, and document the required functionality and constraints, and we track and document tradeoffs and decisions.
    A Vision document is created, and stakeholder needs are elicited. Actors are identified, representing the users, and
    any other system that may interact with the system being developed. Use cases are identified, representing the
behavior of the system. Because use cases are developed according to the actors' needs, the system is more likely to
    be relevant to the users. Each use case is described in detail. The use-case description shows how the system interacts step by step with the
    actors and what the system does. Non-functional requirements are described in Supplementary Specifications.
The same use-case model is used during requirements capture, analysis & design, and test.

Figure: an example of a use-case model
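
Since a use-case description is essentially an ordered dialogue between actors and the system, it can help to picture it as structured data. The sketch below is hypothetical: the "Customer" actor echoes the estimation example earlier, but the use-case name and steps are invented for illustration.

    # Hypothetical use-case description captured as data: each step
    # records who acts (an actor or the system) and what happens.
    withdraw_cash = {
        "name": "Withdraw Cash",
        "actor": "Customer",
        "steps": [
            ("Customer", "inserts the bank card and enters the PIN"),
            ("System",   "validates the PIN and displays the menu"),
            ("Customer", "selects 'withdraw' and enters an amount"),
            ("System",   "dispenses the cash and prints a receipt"),
        ],
    }

    for who, action in withdraw_cash["steps"]:
        print(f"{who}: {action}")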
    Analysis & Design
The goal of the Analysis & Design workflow is to show how the system will be realized in the implementation phase. You want to build a system that:

1. performs, in a specific implementation environment, the tasks and functions specified in the use-case descriptions;
2. fulfills all its requirements;
3. is structured to be robust (easy to change if and when its functional requirements change).
    Analysis & Design results in a design model and optionally an analysis model. The design model serves as an
    abstraction of the source code; that is, the design model acts as a 'blueprint' of how the source code is structured and
    written.
The design model consists of design classes structured into design packages and design subsystems with well-defined interfaces, representing what will become components in the implementation. It also contains descriptions of how objects of these design classes collaborate to perform use cases.
    The design activities are centered around the notion of architecture. The production and validation of this
    architecture is the main focus of early design iterations. Architecture is represented by a number of architectural
    views. These views capture the major structural design decisions. In essence, architectural views are abstractions
    or simplifications of the entire design, in which important characteristics are made more visible by leaving details
    aside.
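
As a small illustration of a design class with a well-defined interface, here is a sketch reusing the Car class from the estimation example earlier. It is a hypothetical fragment, not a real design model; it only shows the idea of an interface plus an object performing one step of an invented "Rent Car" use case.

    from abc import ABC, abstractmethod

    class Vehicle(ABC):
        """A well-defined interface, as a design subsystem would expose."""
        @abstractmethod
        def drive(self, km: float) -> None: ...

    class Car(Vehicle):
        """A design class realizing the Vehicle interface."""
        def __init__(self) -> None:
            self.odometer_km = 0.0

        def drive(self, km: float) -> None:
            self.odometer_km += km

    # Collaboration sketch: one step of a hypothetical "Rent Car" use case.
    car = Car()
    car.drive(12.5)
    print(car.odometer_km)  # 12.5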

A business analyst (BA) should pay special attention to the first two (and sometimes the first three) engineering workflows, and to the second supporting workflow (Configuration and Change Management).

Rational Unified Process for everyone

Figure 1: The RUP iteration model graph

The Rational Unified Process (RUP) is a use-case-driven, architecture-centric, iterative, and incremental software development process. Its goal is to enable software development teams to produce high-quality software that meets user needs, on time and within budget :).
    The Rational Unified Process captures a number of modern software
    engineering best practices in a tangible and practical form. These best
    practices have been identified by the industry as major contributors to the
    success of many software projects. They are:
    1. Develop iteratively
    2. Manage requirements
    3. Model visually
    4. Use component-based architectures
    5. Continuously verify quality
    6. Control changes
    The iteration model graph shown in Figure 1 provides an overview of the
    process, emphasizing its two dimensions: time along the horizontal axis
    and structure along the vertical axis. This two-dimensionality distinguishes
    the Rational Unified Process from many other process models, which use a
    single dimension to illustrate development lifecycle phases.

    The Rational Unified Process's structural dimension describes the software
    development lifecycle in terms of roles, artifacts, activities, and workflows.
    Roles describe a cohesive set of responsibilities that a person may
    play in the development process. Examples of roles are Architect
    and Design Review Board.
    An artifact is a deliverable that is used, produced, or modified
    during the development lifecycle. Examples of artifacts are Use
    cases, Design model, and Risk list.
An activity is a unit of work, performed by one or more persons playing a role, that produces or modifies one or more artifacts.
    Examples of activities include Determine system boundaries and Fix
    a defect.
    A workflow is a collection of functionally related activities and the
    roles and artifacts related to these activities. Within a workflow, we
    can see which roles perform which activities with which artifacts.
    The workflow also outlines the sequence of these activities, so that
    we know when they must occur. Examples of workflows are
    Business Modeling, Test, and Deployment.
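
Because these four structural elements reference each other, it can help to see them as a tiny data model. The sketch below is only a reading aid: the class names mirror the RUP terms, while the wiring of the example instances from the text above is invented.

    from dataclasses import dataclass, field

    @dataclass
    class Role:
        name: str  # e.g. "Architect"

    @dataclass
    class Artifact:
        name: str  # e.g. "Risk list"

    @dataclass
    class Activity:
        name: str                   # e.g. "Fix a defect"
        performed_by: Role
        artifacts: list[Artifact]   # produced or modified

    @dataclass
    class Workflow:
        name: str  # e.g. "Test"
        activities: list[Activity] = field(default_factory=list)  # ordered

    # Hypothetical wiring of the examples mentioned in the text:
    architect = Role("Architect")
    risk_list = Artifact("Risk list")
    boundaries = Activity("Determine system boundaries", architect, [risk_list])
    business_modeling = Workflow("Business Modeling", [boundaries])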


    Phases and Iterations
    The dynamic dimension of the Rational Unified Process describes the
    development lifecycle in temporal terms, using phases and iterations. The
    four phases are Inception, Elaboration, Construction, and Transition.
    Inception is the phase in which the scope of the project is
    determined and the business case is defined.
    Elaboration focuses on defining and validating a robust architecture
    to eliminate technical risk and uncertainty.
    Construction is the phase in which the product is built. Functionality
    is incrementally added to the architectural baseline.
    Transition is the phase in which the product is delivered to end
    users.
    Each phase ends with a project milestone; this includes a review and a
    decision as to whether the project can safely proceed to the next phase. A
    phase, therefore, is not a distinct step during development, as is analysis
    or implementation. Rather, a phase describes 1) what the focus of
    activities should be at a specific point during the product lifecycle, and 2)
    the evaluation criteria for determining whether the phase has been
    successfully accomplished or not.
    Phases consist of one or more iterations. The number of iterations is
    determined by how rapidly and accurately the milestone for that phase
    can be reached. For example, if the problem is vague and the
    development team has little domain experience, more iterations will be
    necessary during Inception. This will ensure that the team can reach an
    accurate description of the problem and that they will not encounter
    domain-related surprises later. A project that involves a new, unproven
    technology will have more iterations during Elaboration, to ensure that the
    architecture incorporates the technology in a sufficient and stable manner.
Every iteration is a mini-project in its own right because it includes
    activities from all the core workflows: Business Modeling; Requirements;
    Analysis and Design; Implementation; Test; and Deployment. The phase
    the iteration belongs to dictates how much emphasis needs to be placed
    on each of these workflows. Having said this, it might be useful to refer
    back to Figure 1.
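
To picture the two dimensions at once, the following toy sketch treats an iteration as a weighted mix of all nine workflows, with the weights depending on the phase. All the numbers are invented; they merely mimic the intuition that, say, Construction leans on Implementation far more than Inception does.

    WORKFLOWS = [
        "Business Modeling", "Requirements", "Analysis & Design",
        "Implementation", "Test", "Deployment",
        "Project Management", "Configuration and Change Management",
        "Environment",
    ]

    # Invented relative emphasis (0..1) for a few (phase, workflow) pairs;
    # anything not listed defaults to a modest 0.3.
    EMPHASIS = {
        ("Inception", "Requirements"): 0.8,
        ("Inception", "Implementation"): 0.1,
        ("Construction", "Implementation"): 0.9,
        ("Construction", "Requirements"): 0.2,
    }

    def workflows_by_emphasis(phase: str) -> list[str]:
        """Order the nine workflows by their (invented) emphasis in a phase."""
        return sorted(WORKFLOWS,
                      key=lambda w: EMPHASIS.get((phase, w), 0.3),
                      reverse=True)

    print(workflows_by_emphasis("Construction")[:3])
    # ['Implementation', 'Business Modeling', 'Analysis & Design']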
That's it for the first post, enjoy the work :)