jameskobielus
Q7: How does the use of on-premises or cloud infrastructure differ at each stage in the ML model lifecycle? What are the key considerations for AI / ML infrastructure? https://www.crowdchat.net/s/85vj9
Frederic Van Haren
Model training normally requires higher-end, more recent (and expensive) GPU resources. Keeping pace with the rate of GPU improvements is costly.
Storage Godfather (HPEStorageGuy)
Thanks for avoiding a TwitterPiss by correctly using on-premises.
Patrick Osborne
To the end user or data scientist, it should be transparent.
Frederic Van Haren
Inference requires a different approach than training a model
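The split Frederic describes is essentially throughput versus latency. As a toy sketch (NumPy least-squares standing in for real GPU training, and a hypothetical `predict` function standing in for a serving endpoint):

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# Training: throughput-oriented -- fit against the whole dataset in one large
# batch, the kind of work that justifies expensive accelerators (plain NumPy
# on CPU here, purely for illustration).
X = rng.normal(size=(10_000, 64))
y = rng.normal(size=(10_000,))
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Inference: latency-oriented -- answer one request at a time, often on
# cheaper hardware deployed closer to the consumers.
def predict(features: np.ndarray) -> float:
    return float(features @ w)

start = time.perf_counter()
score = predict(rng.normal(size=64))
latency_ms = (time.perf_counter() - start) * 1000
print(f"single-request latency: {latency_ms:.3f} ms")
```

The point of the sketch: the training step touches every row of the dataset at once, while the serving path is a tiny per-request computation, so the two phases reward very different infrastructure.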
Patrick Osborne
Regardless of on-prem or off-prem, the infrastructure should be optimized for the workload, flexible to scale, and consumed as a service.
Victor Ghadban
On-premises infrastructure traditionally doesn’t have the scalability or flexibility that cloud has for quick spin-up of resources and tools. A cloud-native tool can alleviate that issue and give you a cloud-like experience on premises.
NandaVijaydev
Data can be anywhere. Sometimes it makes sense to train the model where your data is and deploy where the consumers are. This differs from company to company; having that flexibility helps with a successful deployment.
Patrick Osborne
AI/ML/DL/GPU aaS
Abdul Matheen Raza
For many enterprises a cloud-only deployment is often neither a viable option nor a panacea. Many will continue to use on-prem for certain workloads due to data gravity, security, or regulatory requirements.
NandaVijaydev
Same applies to edge locations. Your models are better served if deployed on edge infrastructure
Frederic Van Haren
The concept of containers (and K8s) keeps developers focused on the problem they are trying to solve, as opposed to solving infrastructure problems.
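A minimal sketch of what that abstraction looks like in practice: a Kubernetes Deployment where the developer only declares what the model server needs, and the cluster handles scheduling, restarts, and scaling. The name and image below are hypothetical; `nvidia.com/gpu` is the standard resource name exposed by the NVIDIA device plugin.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-server            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: inference-server
  template:
    metadata:
      labels:
        app: inference-server
    spec:
      containers:
      - name: model
        image: registry.example.com/model-server:latest  # hypothetical image
        resources:
          limits:
            nvidia.com/gpu: 1       # ask the cluster for one GPU; no node
                                    # selection or driver setup in app code
```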
Abdul Matheen Raza
A more viable solution will have the ability to be deployed either on-premises, in the public cloud, or in a hybrid cloud.
Jason Schroedl
Training and inference happen where the data is managed - data has gravity. It’s important to have the ability to develop, manage, and deploy ML models on any infrastructure - whether on-prem, in the public cloud, or hybrid.
jameskobielus
AI/ML infrastructure is more storage-intensive in the training workflow, and more compute-intensive in the deployment and inferencing workflow.