All About BGA Rework

BGA Rework Reflow

One of the most critical processes in BGA rework is reflow. Reflow occurs after the previous device has been removed and the site prepped. The replacement device is attached using either flux or solder paste. The reflow process in BGA rework should emulate the original manufacturing process as closely as possible. Given the thermal mass of the board in and around the BGA, the profile should match that of the replacement solder balls (if the device has been reballed and will be used as the replacement device) or match, as closely as possible, the profile recommended in the device or solder paste vendor's data sheet when paste is printed.

There are several “rules of thumb” for zeroing in on the reflow profile. It is good to learn about the thermal characteristics of the PCB when trying to dial in a reflow profile. One of the best ways to “learn” about the thermal characteristics of the PCB, when there is only one PCB and no profiling board is available, is to use what is learned during the removal process to help “dial in” the reflow process. Many times a BGA rework technician will use a standard profile to remove the device, tweaking or adjusting the profile based on the results achieved. If a sample board is available, the BGA technician can embed thermocouples into the solder balls (a corner ball and one or two other locations, depending on the size of the package), into the die, around the BGA, and near other components, all of which helps in learning about the thermal characteristics of the device and board during reflow. The proper method for embedding these thermocouples is to attach them to the device, or internal to the PCB, with high-temperature epoxy. Another “rule of thumb” for the reflow profile is making sure that, for lead-free profiles, the solder joint, as seen by the temperature in both the corner and other balls, is above liquidus for a period of 60-90 seconds. Tin-lead solder should be above liquidus for a period of 30-45 seconds.
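To make the time-above-liquidus rule of thumb concrete, here is a minimal Python sketch that computes time above liquidus (TAL) from a logged thermocouple trace. The liquidus temperatures correspond to common SAC305-type lead-free and 63/37 tin-lead alloys, but the TAL windows are simply the rules of thumb above, and the sample log is invented data, not a value from any particular data sheet.

LIQUIDUS_C = {"sac305_lead_free": 217.0, "tin_lead_63_37": 183.0}
TAL_WINDOW_S = {"sac305_lead_free": (60, 90), "tin_lead_63_37": (30, 45)}

def time_above_liquidus(samples, liquidus_c):
    """samples: list of (time_s, temp_c) pairs from an embedded thermocouple."""
    tal = 0.0
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        if c0 >= liquidus_c and c1 >= liquidus_c:
            tal += t1 - t0  # count intervals spent fully above liquidus
    return tal

def check_profile(samples, alloy):
    tal = time_above_liquidus(samples, LIQUIDUS_C[alloy])
    low, high = TAL_WINDOW_S[alloy]
    return low <= tal <= high, tal

# Hypothetical corner-ball log sampled every 2 seconds (a simple triangular peak).
log = [(t, 230 - abs(t - 200) * 0.35) for t in range(0, 400, 2)]
ok, tal = check_profile(log, "sac305_lead_free")
print(f"Time above liquidus: {tal:.0f} s, within 60-90 s window: {ok}")

In practice the same calculation would be run on the traces from each embedded thermocouple (corner ball, additional balls, die) to confirm that every location meets the window.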

Not only should the profile be correct and confirmed through temperature measurements, but the components in and around the BGA being reflowed need to be protected. This is especially true when using a hot air source and for devices which are heat-sensitive, including but not limited to ceramic capacitors, plastic connectors, batteries, and MELFs. In addition, devices with underfill or components with RTV or glue around them should be watched and protected, as these materials will become soft and can potentially run all over the board, making a large mess. Protection from the heat source, especially when using hot air reflow, comes in many different flavors. One of the most commonly used but least effective types of protection is the Kapton™ tape found in many areas of the SMT process; several studies on this topic have shown it to be the LEAST effective type of heat shielding material. Other, more effective options include a water-soluble gel or a ceramic-based nonwoven material. Whatever type of heat shielding material is used to protect neighboring devices during reflow, its use is important for protecting those devices from excessive heat, which can damage them.

In order to run a complete profile, the PCB should be adequately supported. This is especially true if there are “imbalanced” copper sections of the board or in cases where very thin .032″-thick boards are being reflowed. Without adequate board support there may be board warping, which can damage inner layers or deform the board badly enough to make placement of components difficult or to create reliability problems in the solder joints. There are a variety of board support systems on the market, with most higher-end rework systems offering a flexible board mounting and support system design.

Not only is adequate board support required, but proper bottom-side heating of the board will help ensure minimal temperature differences across the board and a lesser propensity for board warpage. Modern BGA rework systems are equipped with sophisticated bottom-side heaters. Advancements for optimizing the reflow process include multizone bottom-side heaters. These heaters allow the user to keep the rework area at a higher temperature than the remainder of the PCB, thereby reducing the likelihood of board warpage during reflow.

A typical lead-free, hot air source thermal profile proceeds as follows. First, the bottom-side heating begins to warm up the board, with one temperature (typically 160 or 170 C) at the rework location and another, typically 150 C, at other areas of the board. While this temperature is applied to the bottom side of the board, the nozzle temperature begins to climb during the “ramp” period of the reflow profile. Too fast a ramp may damage neighboring components or the laminate. Then a “soak” phase, which lowers the ramp rate and starts to activate the flux, begins. After this phase, the liquidus temperature, somewhere between 205 and 220 C, is reached. This begins the reflow zone, in which the maximum temperature is reached and the rework location “sees” a temperature above liquidus for a minimum of 60 and as long as 90 seconds. The reflow profile ends with a cool-down zone. The cool-down cannot be so extreme that the resulting negative temperature gradient leaves the solder joints brittle at the end of the reflow process.
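Because the ramp cannot be too fast and the cool-down cannot be too extreme, a quick check of the logged profile's heating and cooling rates can catch problems before they damage parts. The sketch below is an illustration only; the 3 C/s ramp limit, 4 C/s cool-down limit, and the sample log are placeholder assumptions, not values from any rework system or data sheet.

MAX_RAMP_C_PER_S = 3.0   # assumed limit: too fast a ramp risks neighbors/laminate
MAX_COOL_C_PER_S = 4.0   # assumed limit: too fast a cool-down risks brittle joints

def rate_check(samples, max_ramp, max_cool):
    """samples: list of (time_s, temp_c); returns (worst_ramp, worst_cool, ok)."""
    worst_ramp = worst_cool = 0.0
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        rate = (c1 - c0) / (t1 - t0)
        worst_ramp = max(worst_ramp, rate)    # heating rate
        worst_cool = max(worst_cool, -rate)   # cooling rate
    ok = worst_ramp <= max_ramp and worst_cool <= max_cool
    return worst_ramp, worst_cool, ok

# Hypothetical nozzle-area log: ramp, soak, reflow peak, then cool-down.
log = [(0, 25), (60, 150), (150, 190), (210, 230), (270, 160)]
print(rate_check(log, MAX_RAMP_C_PER_S, MAX_COOL_C_PER_S))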

How To Choose IT Company

When the right questions are asked as you select the best IT support company, the whole process will go much more smoothly. It is essential that you cover all the areas that may worry you, so as to ensure that services are delivered professionally and on time. There are some points worth considering before an IT support company is appointed, and they include:

Testimonials

This is a very easy method, especially when you know someone who has used the company's services in the past and had a great experience. A testimonial cannot guarantee that the services you receive will be similar to those of the past. However, it is much better to take this path than to choose a company at random.

The physical address of the company

A company with a good reputation always has a physical address from which its main business operates. It is important to exercise caution and be wary of anyone working purely from home. When you know a physical address, you can breathe easier knowing the company will still be around even after the job is completed. Fixed, professional premises show how committed the company is; it has taken the time to invest in infrastructure that can house senior staff as well as technical support.

Web site

You need to take your time to go through the IT service provider's website to get a feel for how professional they really are. This is where you will find all the services they offer as well as their contact details in case you need them. Customer feedback and testimonials are something you should look at too.

Pricing is just one of the factors worth considering

It is a good thing if you find a great bargain deal. However, it is important to note that you usually get what you pay for, and this is very true where IT companies are concerned. Smaller companies that are home-based may be able to offer low prices, but are you sure the services will be worth it? You should weigh what your company would lose by not getting the important services against the price the IT company is charging for them. Usually, bigger companies with more staff charge much more than small companies.

Number of senior staff

Support is very important for any company. For an IT company, there are lots of technicalities that need to be handled. When there are lots of people working for the company, you can have the same person assigned to handle your issues every time, and this comes with lots of advantages. The technician will come to understand your network and software setup, so pinpointing exactly where a problem lies becomes much easier after the first fix. This means you get faster service than would be possible if you were assigned a new person every time.

The world has become highly computerized and this is why we need IT support from time to time to handle some of the technicalities that require an expert’s hand. Look at the services you require and match them up with what a company has to offer and you will be on your way to achieving great heights.

Understanding Block Chain Technology

In recent times, block chain technology has redefined the Internet and led to the emergence of a new type of internet where digital information is distributed without being copied. The technology was conceived and devised primarily for crypto currencies, digital currencies like Bitcoin. In contemporary times, Bitcoin is referred to as digital gold, and the total value of this currency is close to about 9 billion US dollars. Block chain technology can also be used to represent other types of digital value. The functioning of the technology is encapsulated, and therefore the user can use it without having to know it in detail. However, it is always recommended to have a basic notion of the technology before using it, as this considerably simplifies its use.

The functioning of the technology is largely encapsulated, meaning there is no need to know the workings of block chain technology in detail; a fundamental idea of how it works is more than sufficient for the individuals using it. In simpler terms, this technology can be defined as a digital ledger of commercial transactions which is incorruptible and can be programmed to record not just financial transactions but anything which has value associated with it.

Information stored with this technology is quite similar to information in a spreadsheet or any distributed database. Just as a spreadsheet containing values can be regularly updated, the block chain too can be updated from time to time. The records stored using block chain technology are not kept in a private location; instead, such data sources are kept in the public domain so that they can be verified on a timely basis. Using such a technology, the information is not held by any centralized server; instead it is stored across many database servers and millions of workstations and computers connected to the internet. It is because of this that block chain data cannot easily be hacked or corrupted.

Since in this technology the blocks of information can be accessed from more than one point in the network, it cannot be controlled by a single entity. And since there are multiple copies of the block chain information available across the network, such technologies do not have a single point of failure. Other facets of this technology are that it is transparent, incorruptible, and very much decentralized, because the information is stored on many host machines across the network, and all these factors contribute to making the block chain network highly secure.
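To make the "linked blocks" idea concrete, here is a minimal Python sketch of a hash-linked ledger: each block stores the hash of the previous block, so tampering with any earlier record breaks every later link. This illustrates only the core data structure; it has no network, consensus, or mining, which a real block chain relies on.

import hashlib, json, time

def block_hash(block):
    payload = {k: block[k] for k in ("time", "records", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(records, prev_hash):
    block = {"time": time.time(), "records": records, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    # Every block must still hash to its stored value and point at its predecessor.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != prev["hash"] or block_hash(prev) != prev["hash"]:
            return False
    return block_hash(chain[-1]) == chain[-1]["hash"]

genesis = make_block(["genesis"], "0" * 64)
chain = [genesis, make_block(["Alice pays Bob 5"], genesis["hash"])]
print(chain_is_valid(chain))         # True
chain[0]["records"] = ["tampered"]   # editing history breaks the hash links
print(chain_is_valid(chain))         # False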

All About Data

Information Technology-‘IT’-is expensive. Every CEO, CFO, COO-virtually every manager at every level of every company in existence knows that irrefutable fact. It’s usually a major line item in every company budget.

For the IT department, managing the company's technical capability means first finding a product that meets whatever need the company has come up with (and sometimes the odd whim of a single manager), then the purchase, installation, training, and maintenance of the hardware, software, networks, and databases that go along with the tool. Then you get to worry about interfaces between the tools, reports, security (both internal and external), and the wonderful, irrepressible, eccentric, and oftentimes destructive habits of the end users even after you've provided complete end-to-end training on the new product.

Then someone changes their mind and you get to do it all over again.

I personally believe every metropolitan area should have a private asylum for IT managers. This facility should come complete with areas for the temporarily insane to conduct personal and violent destruction of computer hardware, provide them a setup of punching bags made to look like ignorant (not stupid, just unrealistic) company officers, and an additional stomping ground full of dummies made to resemble a variety of knuckle-headed end users. And a special place of hell for hackers…

Such a facility would be filled to capacity at all times.

The madness doesn’t stop with the IT department. For other company managers, IT changes mean hours or days of training, plus the downtime and loss of productivity that come with IT problems, failures, or system upgrades.

For sales staff, an IT glitch can mean lost opportunity, loss of revenue, and a less than stellar image of the company that can stay in a customer’s mind for years. Salesmen may never overcome a bad customer experience generated by failed IT. The phrase “the computer is your friend” is not widely spoken among sales people.

But IT is a necessary evil, isn’t it? What company could function without it?

Well, it is necessary. Even an enterprising young man or woman entering the workforce for the first time mowing yards needs a way for customers to reach him/her, a way to manage a schedule, maybe even a way to track who has paid their bill.
But does IT have to be evil?

What if the evil-ness comes because we’re trying to solve the wrong problem with the IT? We’re trying to force a round peg into a square hole, imagining IT can solve our problem without actually identifying what the problem is? We buy computers, networks, communications, and all sorts of things to do one thing-capture and manage information. A simple fact that we inherently all know but gloss over-it isn’t the IT that’s important, it’s what’s traversing that IT.

It’s about the data. Those little bits and bytes that make up characters that make up data elements that coalesce into information that gives us knowledge that further transforms into intelligence that can be used and acted upon.

It’s about the data. Yet we chase the tools driving the data.

But wait (you say)! We have our databases. That’s part of the IT. That’s where our information is stored. We need the IT to get to our information. It’s OUR information.
Well, yes… sort of.

But not really. Your company does store data related to your business in databases, usually inside a proprietary database that is part and parcel of the software you’ve purchased. Most-or at least a great deal-of that data is duplicated in other systems, some internal to your company, but most certainly in some other external system. And setting up a database is hard work, what with getting items identified, parsed, moved into the proper fields, verified, and such. It takes time and manpower, which translates into dollars spent.

And how easy is it to get it back out of that database once you’ve decided to move on to the next cool IT product? How many IT managers work an exit strategy at the same time they’re developing their acquisition strategy? If you buy a proprietary product, do you know what data rights you have and how you’ll get out of that product when the time comes? Because it will. (By the way, the answer is generally ‘no’-it’s hard enough to get the product up and running while telling your vendor and company leadership that you’re already planning (and spending funds on) its demise.)

OK, so who has the data (where is your database actually located and who manages it? Who has access? Who owns it (don’t make an assumption here))? How will it be delivered back so you can move it to a competitor’s platform? Do you need to buy a proprietary (meaning, expensive) tool to extract the data? Who handles problems? Who maintains the documentation over the years so you actually know what that database looks like and exactly what each element means (because that changes too)? This borders on the geeky, but X may not always and forever mean X, or maybe now it’s X+2. Maybe X is now alpha-numeric whereas it started out as numeric only. This information is absolutely vital-what changed and when? Without that documentation, you have no way of knowing if your data is complete, if it’s actually correct, or if it’s been corrupted.

Love your database managers.

So, back to who actually owns the data. Even if your eminently wise IT manager has the bases covered as far as database ownership goes, do you actually own the data elements?

No. You own the intelligence that comes from using the data, and any subsequent storage and retrieval of that intelligence, but you don’t really get to decide the data elements that comprise that intelligence.

For instance, the US Social Security number. The US Government owns it-its structure and rules, and the content assigned per individual. Your company has no say in the matter. It can, however, be used in multiple ways. Some IT systems use all nine characters-with or without the dashes-while some only keep the last four, six, or seven. Other countries have personal identification numbers that look nothing like the US SSN. Now what?

How long is a ‘name’ and who gets to decide what it looks like (no system I’m aware of could capture the symbol Prince used for a while)? How long can a name be? What special characters are allowed? How many names can one person have (first/last/middle or 6-7 names, Aliases, Previously Known As)?
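To see how messy even one “simple” element gets, here is a hypothetical Python sketch of three systems storing the same Social Security number in three different shapes, and a normalizer trying to reconcile them. The system names, sample values, and normalization rules are all made up for illustration.

import re

raw_values = {
    "hr_system": "123-45-6789",   # nine digits with dashes
    "payroll":   "123456789",     # nine digits, no dashes
    "help_desk": "6789",          # last four only
}

def normalize_ssn(value):
    digits = re.sub(r"\D", "", value)       # strip everything but digits
    if len(digits) == 9:
        return digits                        # full, canonical nine digits
    if len(digits) in (4, 6, 7):
        return digits.rjust(9, "*")          # partial value: cannot be reconstructed
    raise ValueError(f"unrecognized SSN format: {value!r}")

for system, value in raw_values.items():
    print(system, "->", normalize_ssn(value))

The partial values can never be turned back into the full element, which is exactly the ownership and stewardship problem described above.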

Within countries, some will say the government owns much of the other personally identifiable information (generally abbreviated to “PII”) for that country (the US probably among them). There may even be some international consortia that believe they ‘own’ data related to their field of expertise (but I bet there are other consortia that would disagree with that position).

We could go on. The point is that no one ‘owns’ a data element, at least nothing that’s agreed upon globally, and that’s a problem.

Why?

Because we are global creatures living as members of a global environment. No man (or country) is an island. Data travels around our world at the speed of thought through social media and interconnected systems. It’s perpetual-once a ‘thought’ is out there, it’s out there for good because somewhere it’s been captured by an IT ‘system’.

Data is accessible from virtually anywhere and we can learn something about anything with a few key strokes (although we have no way of knowing the veracity of what we find).

So, stuff is out there in a plethora of forms, some of it is correct, some of it isn’t, and you need special tools to get much of it.

How do we know what we know? Personally, I think this era will eventually be known as the Second Dark Ages because we don’t know what we know and have no way to capture (into perpetuity) our knowledge. Or the trail of emails, notes, memos, etc that tells how we came to that knowledge, why we made that decision, why that particular path was chosen, etc.

PCs, laptops, mobile devices contain a wealth of information that belongs to an individual or often, a company or organization. When that device goes to the Great Recycling Bin in the Sky, usually through a fried hard drive which makes the data it contains inaccessible, all that data is lost. Fade to black.

I attended a lecture once that said in 1900 human knowledge was doubling every 50 years. In 1950 it was every 25 years, in 1998 (when I heard this) it was every 10 years, and by 2020 it would be every 72 days. Say what? How do we capture that? How do we know what we know when it’s all captured in disparate databases, disparate devices, in different forms?

How on Earth do we manage all of this data/information/knowledge/intelligence?
We need help. We need the computers to help us. As in, Artificial Intelligence (AI). AI could help us make sense of all of it, except the ‘all of it’ is scattered and parsed all over the world without any standard form or organizational structure.

So-what if we stopped driving the IT and instead drove the data (which is what we want anyway)? Just suppose we got control of our data, controlled it, and standardized it across the globe?

Imagine it-data element X looks like this, means this, is accessed by this nomenclature, owned (controlled) by this organization and (maybe) even updated this way. IT could do whatever it wanted with it as long as it didn’t change the structure of the element!

It wouldn’t matter what IT tool we used-whatever suited our needs and budget-because our data would be stand-alone and controlled like the Borg collective. IT cannot change the structure or meaning of the data. Resistance is futile. Companies wouldn’t have to spend millions of dollars defining and documenting their databases because they would be standardized. They would need only to define the data elements they’re interested in. A vendor developing a new IT tool wouldn’t have to modify their tool for every customer-the data routines would be standard (think Service-Oriented Architecture on steroids).
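As a thought experiment, a globally standardized data element might be declared something like the sketch below: the structure, meaning, steward, and access name are fixed and frozen, and an IT tool can read it but cannot redefine it. Every name and value here is invented for illustration.

from dataclasses import dataclass, field

@dataclass(frozen=True)          # frozen: the IT tool cannot change the definition
class DataElement:
    name: str                    # access nomenclature
    meaning: str                 # what the element represents
    structure: str               # agreed format, e.g. a regex
    steward: str                 # organization that owns/controls the definition
    updatable_by: tuple = field(default_factory=tuple)  # who may change values

DATE_OF_BIRTH = DataElement(
    name="person.date_of_birth",
    meaning="Date a natural person was born, Gregorian calendar",
    structure=r"^\d{4}-\d{2}-\d{2}$",          # ISO 8601 date
    steward="Hypothetical PII stewardship body",
    updatable_by=("civil_registry",),          # 'truth' data: source-only updates
)
print(DATE_OF_BIRTH)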

Wow! But how? How would we go about locking down data?

It would take a global endeavor, probably something under the United Nations.
Suppose there was a group that took care of everything associated with Personally Identifiable Information, another for education, another for health, another for accounting, etc etc etc…

It’s mind-boggling. There’d have to be a group just to decide which group a piece of data should be sent to for management (is ‘checking account’ part of banking, accounting, personal, or business information?).

There would be arguments.

It’s been tried before of course, on much smaller scales by small organizations. None have been successful primarily because the group didn’t actually own the data. You can’t control what you don’t own.

I argue here, however, that it is no longer a matter of choice. If we are to avoid becoming that Second Dark Ages, we must find a way to get control of our data and it must be a universal endeavor.

Start small, think big, move fast…

Why not start with a group dedicated to individual attributes? Define those parameters. Establish the data structure. Define the tools to get to, update, and terminate that data. Then move out from there.

Imagine a world where companies or individuals can buy any IT product off the shelf-without the current bureaucracy associated with major purchases-whenever they want, with whatever bells and whistles they want, because the data it uses is universally structured. It would save billions of dollars (after the Great Data Structure in the Sky was set up anyway).

Imagine a world where data is separate from the IT driving it. You can have a database, but you can’t change the structure or meaning. Some elements may be un-updateable except by an authorized group (e.g., source data like date of birth), with a master version captured at whatever level is deemed appropriate-maybe national, then synchronized with a worldwide repository. You can coalesce the data into information and intelligence that (may) become another piece of information captured in its own right, but you do NOT update the ‘truth’ data element.

Yes, I know-take two aspirin and think it through… it will hurt the gray matter. It’s easier to chase the IT than to get a handle on the data, which is why we go that route. But we have to start.

Benefit of 3D Printing

My father’s dream was to build his own house.

However, building a house in India was an incredibly complicated affair.

First, you had to buy a plot of land. Next, you had to get an allocation from the government for all the materials you needed to make a house – cement, steel rebar, piping, you name it.

It was all in short supply because of India’s socialist economy, along with a quota system to dole out the limited quantities then available. And then to get your water, sewage and electricity connected… that was yet another ordeal.

Simply getting all this together was an enormous effort that took years. Of course, there was an easier way…

You hired a “fixer.”

In India, and anywhere else with a dysfunctional bureaucracy, a fixer is someone who knows all the right people. He greases someone’s palm over here and trades a favor with someone else over there to get things done.

With a fixer, what might have taken two or three years instead took just a few months.

Still, with all that, the house my father built took years to finish. However, today, a new technology is emerging that can shrink build times and make constructing a home much cheaper.

The Incredible Costs of Rebuilding

That technology is 3-D printing.

And this technology may suddenly become mainstream because of the incredible damage to housing that Hurricanes Harvey, Irma and Maria have done in 2017.

Harvey is estimated to have completely destroyed 12,700 homes.

Irma is estimated to have destroyed 25% of all homes in the Florida Keys.

Maria is estimated to have caused damage worth as much as $30 billion across the Caribbean.

Dominica, an island I’ve visited to go hiking and canyoning, experienced a near 100% loss of houses and buildings. It’s unlikely that Dominica can afford to reconstruct itself using the old-fashioned, traditional way of building houses and buildings. It would cost too much money, and it would take too long.

However, Cazza, a 3-D printing company, could have a solution.

Using Cazza’s X1 robot, 3-D printed buildings like houses, villas, shelters, warehouses and commercial buildings can go up in as little as one week.

Cazza believes that using its 3-D printing technology will save as much as 40% over the old, traditional ways of building.

That’s a $20,000 savings on a house that costs $50,000 to put up. And remember, the 3-D printed model gets you your house in a week instead of months or years.

The current estimate for the still-incomplete hurricane season is already as much as $340 billion.

If you assume that 30% of this damage is destroyed homes and buildings, implementing 3-D printing technology like the Cazza X1 to rebuild will save as much as $40.8 billion. That’s a big deal.
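The arithmetic behind that estimate is simple enough to sketch out. The 30% share of damage attributed to destroyed buildings and the 40% construction saving are the assumptions stated above, not measured figures.

season_damage_usd = 340e9   # estimated hurricane-season damage
building_share   = 0.30     # portion assumed to be destroyed homes and buildings
printing_saving  = 0.40     # assumed saving from 3-D printed rebuilds

rebuild_cost = season_damage_usd * building_share   # $102 billion
savings      = rebuild_cost * printing_saving       # $40.8 billion
print(f"Rebuild cost: ${rebuild_cost/1e9:.1f}B, potential savings: ${savings/1e9:.1f}B")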

This is why I’m watching the 3-D printing technology used on homes and buildings carefully. Because, in time, the techniques used to rebuild from disasters are also going to be used to make regular homes less expensive too. And the company that makes the 3-D printing technology is going to have a stock that soars for years.

All About Cloud Computing

Cloud computing is a clever and efficient way of sharing software systems through technology in the cloud rather than keeping individual copies of everything. Simply put, we do not really possess Facebook’s software; we simply access it online.

It is quite certain that computer networks and the wonders of the internet have permeated our bones, given that most of us live, eat, study and work with computers. Each day seems to bring something startlingly new in the fabulous world of technology, whether it applies to education, smartphones, cameras or automobiles. Space technology, medicine, and weapons are all in the race to elevate their systems by means of superior technology.

Usage of cloud computing

Can you feel the enormous weight of all that Aadhaar or census data that deals with over a billion people? Can hundreds or thousands of computers handle all that information? While it is true that hard disks can contain thousands of books, where is it all going to end?

It is a profound mistake to think that the earth’s resources would never end. Maybe our ancestors thought that way or could not imagine the extent we would use up all the resources. The time will come one day when we would have used up all the natural resources that now exist and seem infinite like the sun. The sun itself would burn out and life on earth would cease! Yet that day is too far away to start worrying now!

Like the way we use the electricity grid, paying for what we consume, the cloud meters all of its elements and payment is made according to the extent of usage. There is no need to replicate software and extensive hardware with each user. Costs thus get shared, like five people using a pool vehicle instead of driving to work in separate vehicles!
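A tiny sketch of that metered, pay-for-what-you-use model: usage quantities are multiplied by per-unit rates, just like an electricity bill. The rate and usage numbers are made-up placeholders, not any provider's real pricing.

rates = {"compute_hours": 0.05, "storage_gb_month": 0.02, "data_transfer_gb": 0.08}
usage = {"compute_hours": 720, "storage_gb_month": 500, "data_transfer_gb": 150}

# Sum each metered item multiplied by its per-unit rate.
bill = sum(rates[item] * amount for item, amount in usage.items())
print(f"Monthly cloud bill: ${bill:.2f}")   # 36.00 + 10.00 + 12.00 = $58.00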

The cloud could be a private cloud completely owned by one organization for its own use, a public cloud that provides facilities over a network, or a hybrid cloud that combines several kinds of services.

Have you heard about IaaS, SaaS, PaaS and UCaaS? The “S” at the end stands for service, and the initial letters stand for Infrastructure, Software, Platform and Unified Communications. A range of services is dispensed from the cloud, with the organization and management left in the hands of the cloud provider that dispenses them. Users would access the services through smartphones or laptops, just as they do at present, and sign up for the services they wish to use, just as we opt for the channels we want with the cable operator.

Benefits of cloud computing

Experts believe that such an arrangement brings tremendous benefits compared to the present setup. The greatest advantage is perhaps security, though we may still worry about sensitive data. The advantages of a large organization are many in terms of shared strength, like an army of people and systems.

The freedom of accessing the cloud from any point on earth through a browser is a mighty advantage, and it does not really matter what device you are working from. Costs and maintenance become simpler because centralized systems and software avoid unnecessary duplication with each user.

Performance, productivity, and reliability are ever so much enhanced under a large umbrella of experts and the user has little to worry about except to connect and work or play. Three cheers for the cloud computing of the future!

Benefit of Fiber Optic Cables

The differences between fiber optic and copper wire transmission boil down to the speed of photons versus the speed of electrons. While signals in fiber optic cables do not travel at the velocity of light, they come very close, only about 31 percent slower. Here are more benefits.

Security

Hackers can access copper business cabling with relative ease through cable tapping or various other simple methods. The only way to penetrate a fiber-optic cable is to physically cut the fibers, which causes the transmission to disappear. Fiber-optic cable is one of the most powerful ways to improve your company’s protection against cyber crime.

Fiber Optic Transmission Has Low Attenuation

When traveling over a long range, fiber optic cables suffer less signal loss than copper cables. This is known as low attenuation. Copper connections can only transmit information up to about 328 ft due to power loss, whereas fiber cables can carry a signal from 984.2 ft up to 24.8 miles, depending on the fiber type.
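One way to see why low attenuation translates into reach is a simple loss-budget calculation: maximum distance is roughly the allowable loss divided by the loss per kilometre. The figures below are rough illustrative assumptions (around 0.35 dB/km for single-mode fiber, on the order of 200 dB/km for twisted-pair copper at data frequencies, and a made-up 23 dB budget), not values from any standard.

def max_reach_km(loss_budget_db, attenuation_db_per_km):
    """Rough reach estimate: distance at which the signal uses up the loss budget."""
    return loss_budget_db / attenuation_db_per_km

LOSS_BUDGET_DB = 23.0   # made-up example budget
print("fiber  (0.35 dB/km):", round(max_reach_km(LOSS_BUDGET_DB, 0.35), 1), "km")   # ~65.7 km
print("copper (200 dB/km) :", round(max_reach_km(LOSS_BUDGET_DB, 200.0), 3), "km")  # ~0.115 km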

HD Video Support

For a lot of companies, video and teleconferencing are essential tools for employee training, advertising, and sales. With a fiber-optic system, many companies can easily boost their investment in video conferencing as a long-lasting business tool without sacrificing bandwidth. Research indicates that greater investment in video conferencing saves companies thousands every month or year, especially if it cuts out excessive business travel.

Resistance to Disturbance

Copper cable is usually sensitive to electromagnetic interference, which may be caused by the proximity of heavy equipment. PMMA fiber cables do not degrade because of electromagnetic interference. If your company shares a telecoms space with other businesses, fiber-optic cabling can protect your connection from dropping out when the other organizations are using equipment in the same space that could interfere with your connection.

Fiber Cables Are Resistant To Electromagnetic Intrusion

Copper wires, if not installed properly, will create electromagnetic currents that may interfere with other cables and wreak havoc on the network. Fiber cables, unlike other cable types, do not carry electromagnetic currents.

Symmetric Speed

Symmetric speed is a term used to refer to equal upload and download rates on a connection. With a fiber cable connection, your employees can benefit from equal upload and download speeds.

Is symmetric speed necessary? It’s not strictly critical for business operations, but it’s helpful. Improvements in symmetric speed reflect how networks are used today. Unlike decades ago, today’s employees upload video content and files and make calls while they work. Symmetric speed allows users to accommodate heavy demands on uploads and downloads simultaneously via their data connection.

Fire Safe

An additional advantage of PMMA fiber optic cables is that they are not really a fire risk. This is related to the same reason the cables do not emit electromagnetic interference: they carry light rather than electrical current.

Fiber Cables usually do not break easily

This means that you will not have to worry about replacing them as frequently as copper cables. Though fiber is made from glass, copper wires are more susceptible to damage than fiber cables are.

When Technology Made Up Your Life

Technology Importance:
Technology has made immense advancements over the years. It has helped us in many ways. In our daily lives there is not a single thing that does not involve the use of technology. It is just impossible to avoid the impact of technology, whether it is positive or negative. Technology has proven that we cannot ignore the ease it brings to our lives. Without technology our lives would be really difficult. We have become so accustomed to using technological advancements that at times we don’t even realize how dependent we are.

Education:
The advancements in technology have helped us in every field of life, especially science. They have also helped students in a lot of ways. The internet has tons of information about everything. Service projects are available on the internet to help students in their coursework. Online learning programs and online libraries are the main sources that catch a student's attention. Almost all the universities around the world run online degree programmes for their off-campus students. Students, teachers and researchers have access to all sorts of data to analyse, interpret and utilize.

Health
Medical science has found cures for almost all the diseases that were incurable a few decades ago. Numerous lives have been saved since the discovery of antibiotics and other medicines. The miracles medical science has achieved range from vaccines to stem cell production. The list goes on, and we cannot be thankful enough to medical science for the immense ease it has brought to our lives by saving us from the countless maladies around us.

Entertainment
Technology provides us with plenty of ways to occupy our time. Kids and teens especially are into the trend of playing games on computers, laptops or even smartphones. Radio was the first invention that aired various programmes for listeners, ranging from music and news to plays. This led to the invention of TV, which still remains one of the most popular ways of spending your time. It not only entertains us but also provides us with the latest news. The variety of programmes on different TV channels is more than enough to keep people occupied.

In the past people used to have cassette tapes or CDs in order to listen to music. Today’s portable music players have made it easier for people to listen to music. The sources of entertainment through electronic and print media are endless. Print media has become more advanced and printing of books and other informational material has become easier, faster and cheaper.

Communication
In our daily lives people hardly find time to talk with their loved ones in person. Technology has solved this issue by connecting people with their relatives and loved ones across the globe. We can share our daily activities with our friends by using social media. Texting, e-mailing and calling have revolutionized the way we communicate. There are countless apps that are used by innumerable people to stay in touch with their friends and family.

Common Mistakes While Installing Network Cables

If you have shifted to a new home or will soon be shifting your workplace to a new building, you are probably already thinking of installing network cables as one of your first tasks. But wait till you read further. Whoever said “learn from the mistakes of others” must have been a genius. So let’s put that suggestion to practical use and look at the most common mistakes made while installing network cables, to avoid making them ourselves.

Using Separate Cables for Voice & Data

If you ask any relevant person about their biggest mistake while network cabling their home or workplace, this is one of the most common answers you will hear. Earlier, twisted-pair cabling was comparatively expensive, so people opted for different cabling for voice and data. But now the cabling itself isn’t that much of an expense.

In fact, the latest phone systems require data-level cabling, so you don’t have a choice but to get suitable cabling that supports both. Not thinking about the possibility of installing one of those phone systems in the future is another major mistake.

Compromising with the Quality/Version

A network connection isn’t temporary. With ongoing advancements, it would be a grave mistake to still choose the cheapest and most basic cables available. Although it isn’t mandatory to go with totally high-end cables, it is a smart option to go for quality Ethernet cables capable of supporting a robust connection.

That being said, you might not need 10 Gbps speeds right now, but let’s just say 100 Mbps should also not be what you settle for.

Cabling Parallel to Electric Cables

Running your network cables parallel to electric cables exposes them to the magnetic field generated by the electric cables, which can cause major disruptions. You can end up with lost transmissions at times, or very slow communication between the wires.

Deciding your cable path carefully, therefore, is very important, rather than just going about it randomly.

Cutting Cables Too Short/Long

You certainly don’t want your cables stretched tight to reach their destination connectors. So the best way to go about it is to measure the exact distance of your connections and allow just a few inches more for ease. Don’t cut them too short.

If short cables are a problem, overly long ones are a bigger problem. Every cable type has its distance limit, and if you choose lengths longer than that limit, you may suffer from slower and frequently disrupted connections. So take the distance limitations of your cables seriously.
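A quick planning sketch can catch runs that will exceed the limit before any cable is cut. The 328 ft (100 m) figure below is the commonly cited maximum for a twisted-pair Ethernet run; the planned runs and the 10% slack factor are made-up examples.

MAX_RUN_FT = 328        # commonly cited copper Ethernet channel limit
SLACK = 1.10            # cut ~10% extra so cables are not stretched taut

planned_runs_ft = {"office_1": 45, "office_2": 120, "server_room": 310}

for name, length in planned_runs_ft.items():
    cut_length = length * SLACK
    status = "OK" if cut_length <= MAX_RUN_FT else "TOO LONG - add a switch/repeater"
    print(f"{name}: cut {cut_length:.0f} ft -> {status}")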

Not Testing Cables Simultaneously

Do you want to end up scratching your head when your network cabling is done, only to realize that a few faulty cables are preventing the connection from working? We know the answer. So the best way to avoid this is to check every single cable while establishing the connection.

It might sound like a time-consuming task, given that you are paying a hefty amount for labor, but trust us, it’s better than paying them (much more) later to fix a brand new connection.

Other than that, don’t go about the connection haphazardly. Plan in advance and go for cable organization, as it comes in really handy in the long run and can save a lot on labor costs for maintenance. Additionally, you should also take care of the codes and ordinances of your region.

From a utility point of view, network cabling these days is as important a job as electric wiring. So it is important to get good cables, equipment, and proficient labor to avoid regrets later. To ensure everything goes smoothly, it’s a good idea to hire professionals for the job rather than planning it out yourself.

How To Choose Notebook or Tablet

Laptop PC or tablet: which is right for you? We will help guide you through the different advantages and limitations of the two, laptop PCs and tablets, so you can make an informed choice on your next gadget. So check out our guide to choosing between a laptop computer and a tablet, and go forth as an informed consumer into yonder bloated market. By and large, tablets are 7-11 inches, while portable laptops are 12-16 inches, which right away suggests that the former will be easier to carry. If you are going for one of the bigger tablets or a hybrid like the 18.4-inch Galaxy View tablet, you may want to consider whether a small laptop computer would work out better for you.

Laptops compare favorably to tablets, primarily due to their size and the capacity to house more hardware. Multitasking is easier on a laptop PC than on a tablet, although tablets are progressively offering better multitasking arrangements. The top-of-the-line iPad Pro has 4GB of memory and storage options starting at 32GB, while Google’s flagship Pixel C tablet has 3GB of memory, with internal storage starting at 32GB. While you will pay at least $679 for the iPad Pro, you can get a Lenovo Yoga 500 convertible laptop with 4GB of memory, expandable to 8GB, and 1TB of internal storage for $399.

In the event that you are searching for something that will empower you to complete more top to bottom errands requiring different projects, at that point a Laptop PC in all likelihood is the approach, however a portion of the higher end tablets including the iPad Pro and Surface Pro 4, will offer superior execution over less complex tablets like the Amazon Fire Hi-def 8 or iPad Air range. It is only a question of not having an, or as large a screen, which enables the smaller gadgets to keep up a more drawn out charge than a laptop. In any case, more expensive Laptop PC’s can last longer, however, once more, you should pay more to get a Laptop PC that holds its charge nearly as long as a tablet. Laptop PC’s offer less battery life than tablets, so in the event that you are searching for something which doesn’t have to complete complicated assignments, at that point a tablet may be the best decision. In general, an ultimate conclusion will significantly rely upon your particular needs. For some, the tablet will get the job done, and for others, the Laptop PC is an unquestionable requirement.